The Champions Trophy of 2017 was a topsy-turvy tournament. Teams won thumping victories one day, only to suffer big defeats the next. India beat Pakistan by 124 runs in the league match on June 4, and lost the final to them by 180 runs on June 18. Sri Lanka, who had beaten India in only four of their previous 21 meetings dating back to the 2011 World Cup final, comfortably chased down 322 with more than an over to spare - and did little else of note in the tournament. England came into the semi-final with a perfect record and were comprehensively thumped by Pakistan with 77 balls and eight wickets to spare.
The form book suggested that this would be one of the most open ICC tournaments ever. For much of the 2000s, such was Australia's dominance in world cricket that they started almost all ICC tournaments as overwhelming favourites. Since then, the absence of any one outstanding team has contributed to a sense of parity. So has the relative regularisation of playing conditions - pitches, boundary sizes, outfield quality, rules favouring the bat - which means that playing in England or Australia is no longer as distinct from playing in India or Sri Lanka as it once was.
What stood out, however, was the complete absence of competitive finishes. In almost all games, by the 40th over of the second innings, it was clear that one of the two teams was in the ascendant. The results in the Champions Trophy are part of a larger trend in ODI cricket towards greater parity among teams and wider margins of victory in individual games.
Margin of victory
The question of margin of victory is a difficult one in ODI cricket. When the team batting first wins, it can do so by any margin, whereas when the team batting second wins, it does so by scoring one run more than the team batting first. While the team batting first approaches its innings with a view to scoring as many runs as it can in the allotted overs, the team batting second is working towards a target. So while a team chasing 220 could win in 39 overs or 48 overs, this is not necessarily a reflection of their potential score had they batted first.
Yet the fact remains that a competitive score in a 50-over ODI has shifted from about 220 in the early 1990s to about 280 in the mid-2010s. Chasing teams have been systematically more successful chasing higher totals, and setting teams have systematically achieved higher totals. To normalise the margin of victory in ODIs, the following method is used.
The goal of this normalisation is to describe the margin of victory (regardless of whether the team batting first or second is the winner) in terms of runs. Let's consider the example of the Pakistan v England semi-final on June 14.
England batted first and scored 211 all out. Since they were all out, this is counted as 211 runs scored in 300 balls faced. In their reply, Pakistan reached 215 for 2 in 223 balls. Extrapolating to 300 balls, this comes to 289 runs. So one way to describe Pakistan's margin of victory is that, across both innings, they emerged with a surplus of 78 runs.
What if Pakistan had scored 215 for 8 in 223 balls? In that case, the rate at which they lost those eight wickets - one roughly every 28 balls - implies they would have been bowled out in fewer than 300 balls. In such an instance, the projection is made per wicket instead: 215 for 8 is 26.9 runs per wicket, which over ten wickets comes to an expected total of 269. Over the two innings, Pakistan's surplus would be 58 runs.
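The projection described above can be sketched as follows (a minimal sketch: the function name, and the default of 300 balls for a full 50-over innings, are mine rather than the article's):

```python
def projected_total(runs, wickets, balls_faced, balls_available=300):
    """Project a chasing side's score over a full innings.

    If the rate of wicket loss implies the side would have been bowled
    out before the full allotment of balls, project per wicket (runs
    per wicket times ten); otherwise extrapolate the run rate.
    """
    if wickets == 10 or balls_faced >= balls_available:
        return float(runs)  # innings already complete; no projection needed
    if wickets > 0 and (balls_faced / wickets) * 10 < balls_available:
        # Would have been all out before 300 balls: runs-per-wicket projection
        return runs / wickets * 10
    # Otherwise, extrapolate the scoring rate over the full 300 balls
    return runs * balls_available / balls_faced
```

For the semi-final example, 215 for 2 in 223 balls projects to 289, a surplus of 78 over England's 211; the hypothetical 215 for 8 in 223 balls projects to 269.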
Normally, the margin of victory would have to be considered in terms of runs for winners who batted first, and a two-dimensional measure (wickets in hand and balls remaining) for winners who batted second. This approach effectively replaces the two-dimensional measurement with a run measurement. Even though this run measure is not directly comparable to the first-innings run measure, it is comparable to other second-innings run measures. In the comparison below, first and second innings are considered separately (and D/L-affected games are excluded).
Before presenting the findings about margins, here is an overview of how chasing teams have won since 1990. As one would expect, bigger targets mean that teams have to use up more wickets to reach them. Interestingly, the success rate of chasing teams has improved from 49% in the 1990s to 52.5% in the 2010s. Given the large number of games, this improvement is significant. Though targets have become stiffer, chasing teams have been successful more often, and they have become better at using the lower order to reach them.
| Decade | 0 to 3 wickets | 4 to 6 wickets | 7 to 9 wickets |
| ------ | -------------- | -------------- | -------------- |
| 1990s  | 179 (41%)      | 209 (47%)      | 52 (12%)       |
| 2000s  | 308 (42%)      | 310 (42%)      | 111 (16%)      |
| 2010s  | 190 (36%)      | 221 (42%)      | 117 (22%)      |
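The wicket bands in the table above can be tallied with a short sketch (the function name is hypothetical; the input is a list giving the number of wickets lost in each successful chase):

```python
from collections import Counter

def chase_wins_by_wickets(wickets_lost_in_wins):
    """Tally successful chases into the 0-3, 4-6 and 7-9 wicket bands,
    returning each band's count and its share as a whole percentage."""
    bands = Counter()
    for w in wickets_lost_in_wins:
        if w <= 3:
            bands["0 to 3"] += 1
        elif w <= 6:
            bands["4 to 6"] += 1
        else:
            bands["7 to 9"] += 1
    total = sum(bands.values())
    return {band: (n, round(100 * n / total)) for band, n in bands.items()}
```

Feeding in 190 chases won with one wicket down, 221 with five down and 117 with eight down reproduces the 2010s row of the table.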
The chart below shows the median margin of victory in each year since 1992 for games where the team batting first won (blue) and where the team batting second won (red). Note that this calculation only considers games involving the eight oldest Test nations. Bangladesh and Zimbabwe have been competitive at the top level at various times during this period, but not consistently.
The chart shows that the median margin of victory for teams batting first has increased, while the margin of victory for teams batting second has held steady between the 20- and 30-run mark. Twenty-eight per cent of games where the team batting first won in the 1992-2000 period were decided by a margin of 20 runs or fewer. In the 2010s, this has reduced to 18%. For teams batting second and winning, 51% of games were decided by a margin of 20 runs or fewer in the 1992-2000 period. In the 2010s, this has dropped to 43%.
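The two figures used here - the median margin and the share of close games - can be computed with a minimal sketch, assuming the margins have already been normalised to runs as described earlier (the 20-run cutoff matches the one used in the text):

```python
import statistics

def margin_summary(margins, close_threshold=20):
    """Return the median victory margin (in runs) and the share of wins
    decided by `close_threshold` runs or fewer, as a whole percentage."""
    median = statistics.median(margins)
    close = 100 * sum(m <= close_threshold for m in margins) / len(margins)
    return median, round(close)
```

For example, margins of 10, 20, 30 and 40 runs give a median of 25 and a close-game share of 50%.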
The data shows that close games, as understood in terms of margin of victory, have become less common as scoring rates have increased. This is despite the fact that chasing teams have been more successful, and teams have won chases with seven, eight or nine wickets down more frequently in the current decade than ever before. The average completed ODI involving the eight oldest Test nations in the 1990s lasted 545 balls. In the 2010s, this figure was down to 523 balls. The average number of wickets that fall has increased marginally, from 14.1 per match to 14.5.
The best explanation for this combination of data is that ODIs have become shootouts between rival batting line-ups. When one line-up comes good, it reaches a total that is usually beyond the opposition.
The table below shows the distribution of targets set by the team batting first (for matches selected as stipulated above) by decade. The share of totals under 225 has reduced from 52% in the 1990s to 28% in the 2010s. The share of totals of 275 or more has increased from 12% in the 1990s to 41% in the 2010s. The share of totals in the 225-275 range has remained more or less the same (36% in the 1990s, 32% in the 2010s). But the meaning of these totals has changed.
| Decade | Under 200 | 200-224   | 225-249   | 250-274   | 275 or more | Total |
| ------ | --------- | --------- | --------- | --------- | ----------- | ----- |
| 1990s  | 199 (31%) | 135 (21%) | 125 (20%) | 101 (16%) | 80 (12%)    | 640   |
| 2000s  | 216 (25%) | 106 (12%) | 144 (17%) | 145 (17%) | 249 (29%)   | 860   |
| 2010s  | 88 (16%)  | 69 (12%)  | 81 (15%)  | 89 (16%)  | 225 (41%)   | 552   |
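Assigning a first-innings total to a scoring band can be sketched as below. The 225 and 275 boundaries come from the text; the 200 and 250 splits are assumptions for illustration:

```python
from bisect import bisect_right

def total_band(total, edges=(200, 225, 250, 275)):
    """Assign a first-innings total to a scoring band.

    Only the 225 and 275 boundaries are stated in the article;
    the 200 and 250 edges are illustrative assumptions.
    """
    labels = ["under 200", "200-224", "225-249", "250-274", "275+"]
    return labels[bisect_right(edges, total)]
```

So a total of 199 falls in the lowest band, 274 in the second-highest, and anything from 275 up in the top band.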
In the 1990s, teams could compete with both bat and ball. This is to say that they could get back in the game, or gain the ascendancy, either through a good spell of about ten to 12 overs by a pair of bowlers, or a good spell of about ten to 12 overs by a pair of batsmen. By the 2010s, the increased scoring rates have meant that bowlers are almost always playing total defence. It is now batsmen competing with batsmen. The number of ways in which teams can come back into games has been systematically reduced.
This new reality is also reflected in the way teams are composed. The 2010s have been marked by teams preferring to pack their XIs with allrounders and limiting themselves to two specialist bowlers.
Close observers of ODI cricket will not be surprised by the claim that the current era is an age of parity. The semi-finals of the last three major ICC ODI tournaments have featured eight different teams. Only West Indies (who won the last World T20) and Zimbabwe have failed to reach a semi-final during this period. This can be shown systematically using a method to calculate the dominance of cricket teams, which has been previously described in these pages.
The chart below shows the ratings of the ten oldest Full Member nations of the ICC immediately after every major ICC tournament since 1998. Since Ricky Ponting's outstanding Australian side broke up, some teams have been better than others, but no single team has approached a comparable completeness of dominance.
In the 2010s, for instance, Australia have been more dominant in ODIs at home than they were in the 2000s. Yet overall they are nowhere near as dominant in the 2010s as they were in the 2000s. The table below illustrates this idea of completeness. It considers matches among the eight oldest Test-playing teams in the 2000s and 2010s at home, away and at neutral venues. The record is shown in terms of wins per loss.
Ponting's Australians had the ability to win everywhere, against every type of opponent. Today, no team is comparably complete. Parity is the result of this lack of completeness.
The two ODI trends for the 2010s identified in this article so far - individual games becoming less competitive, and parity among teams - are related to each other. Teams are composed differently these days compared to how they were composed earlier. During Australia's period of dominance in the 2000s, they played more or less the same team home and abroad; what's more, they played the same type of team: four wicket-taking bowlers and seven batsmen (having Adam Gilchrist helped). One or two of the seven batsmen could bowl quite well. Australia were dominant because no other side could match them when it came to the quality of their seven batsmen or their four bowlers in any playing conditions.
The parity across teams these days is a result of the increase in scoring rates, and scoring rates have increased because teams are picking XIs differently from the way Australia did under Ponting. Higher scoring rates mean greater risks. Batting risks tend to come off that little bit more frequently at home despite the relative regularisation of playing conditions. In the 2000s, the home team won 1.35 games per defeat. In the 2010s, this has increased to 1.75 games per defeat. Current parity is as much a result of the relative marginalisation of bowlers as of the absence of an unusually brilliant group of players in any one team.
The increase in run rates, the reduction in the number of specialist wicket-taking bowlers, and the regularisation of playing conditions (relative to the 1990s and early 2000s), when taken together, point to the conclusion that ODI games have increasingly become shootouts between rival batting line-ups, with bowlers being marginalised. If teams collapse and get bowled out from time to time, it is because the risk-taking has failed. The competitive par score is vanishing. First-innings scores increasingly seem to be either clearly below par or clearly above par.
In 2005, the ICC began a decade of experimentation with the Powerplay rules to solve the so-called "middle-overs problem", where teams used the middle overs of ODI innings to accumulate runs easily while they preserved resources for the slog. This problem has been resolved to a large extent. But the price has been the marginalisation of bowlers. Is this really what the ICC intended? In 2015, the ICC explained that bowlers would benefit from their most recent rule changes. Nearly two years on, the evidence for this remains weak. More changes will be necessary to turn ODI cricket back into the contest between bat and ball that it was in the 1990s.
Kartikeya Date writes at A Cricketing View. @cricketingview