# Measuring Team Stability

Australia's hectic ODI programme this year, involving 32 different players across 39 matches, set me wondering whether there had ever been a higher "churn" of players in the team. It was easy to establish that 39 matches was a record high for Australia in a calendar year, beating the 37 played in 1999, and that never before had Australia fielded as many as 32 players in the same span; the previous high was 26, in 1997.

But a high number of matches creates an expectation of a high number of players; it may well be that the team was more stable in 2009 than in 2008, when only 18 matches were played but as many as 20 different players were used.

Attempting to quantify team stability from these figures is not as straightforward as it may first seem. One could simply divide and come up with 0.82 players per match for 2009, against 1.11 for 2008, but this is meaningless on its own: every team, after all, fields 11 players per match!

Even multiplying the number of matches by 11 and dividing by the number of players (13.4 for 2009, 9.9 for 2008), to give an average number of matches per player in the calendar year, is not much use, since it doesn't allow comparison between years in which different numbers of matches were played.

In the end, I decided to start with the premise that each calendar year commences with a match in which eleven players take part, and to then measure the changes that occurred subsequent to that first match.

So, for 2009, after the first match, another 21 players represented Australia in the next 38 matches, giving a stability index of 0.55 extra players per match. In 2008, it was another nine players in 17 matches, for an index of 0.53.
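For anyone who wants to check the numbers, the index described above reduces to a one-line calculation. Here is a minimal sketch in Python (the function name is my own, not part of any published tool):

```python
def stability_index(matches, players):
    """Extra players used after the year's first match (which uses 11),
    divided by the number of subsequent matches.
    Assumes at least 2 matches and at least 11 players in the year."""
    return (players - 11) / (matches - 1)

print(round(stability_index(39, 32), 2))  # 2009 -> 0.55
print(round(stability_index(18, 20), 2))  # 2008 -> 0.53
```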

Doing this for Australian teams in each calendar year since 1979, when ODIs became an integral part of the cricket scene, gives the following results:

| Year | Matches | Players | Stability Index |
|------|---------|---------|-----------------|
| 1979 | 13 | 30 | 1.58 |
| 1980 | 9 | 23 | 1.50 |
| 1981 | 17 | 24 | 0.81 |
| 1982 | 15 | 20 | 0.64 |
| 1983 | 23 | 22 | 0.50 |
| 1984 | 22 | 21 | 0.48 |
| 1985 | 21 | 24 | 0.65 |
| 1986 | 23 | 18 | 0.32 |
| 1987 | 24 | 22 | 0.48 |
| 1988 | 15 | 18 | 0.50 |
| 1989 | 18 | 21 | 0.59 |
| 1990 | 23 | 18 | 0.32 |
| 1991 | 14 | 17 | 0.46 |
| 1992 | 21 | 19 | 0.40 |
| 1993 | 17 | 20 | 0.56 |
| 1994 | 30 | 23 | 0.41 |
| 1995 | 13 | 22 | 0.92 |
| 1996 | 26 | 20 | 0.36 |
| 1997 | 19 | 26 | 0.83 |
| 1998 | 25 | 23 | 0.50 |
| 1999 | 37 | 21 | 0.28 |
| 2000 | 23 | 17 | 0.27 |
| 2001 | 21 | 19 | 0.40 |
| 2002 | 29 | 21 | 0.36 |
| 2003 | 35 | 21 | 0.29 |
| 2004 | 26 | 20 | 0.36 |
| 2005 | 29 | 23 | 0.43 |
| 2006 | 29 | 23 | 0.43 |
| 2007 | 34 | 21 | 0.30 |
| 2008 | 18 | 20 | 0.53 |
| 2009 | 39 | 32 | 0.55 |

The table indicates that the Australian team has been less stable over the past couple of years than in the previous nine, an era of great success for the team. There are significant spikes in the years preceding World Cups (e.g. 1995 and 1997-98), presumably as selectors experimented with players in an effort to strike the right combination, while the World Cup years themselves seem to be more stable.

The two years just after the end of Kerry Packer's World Series stand out as the most tumultuous in terms of team selection, with an average of over 1.5 new players per match. After that, the benefits of a stable team came to be recognised and the ratios dropped quickly, although not initially to the levels reached in the 2000s, Australia's ODI golden age.

There is much more that can be investigated here, including looking at other teams, and correlating team stability with team success.

## Comments

Why not just look at the average correlation coefficient for the team for each season?

Ric - I always thought looking at calendar years is somewhat arbitrary. Will it not make more sense to look at the number of players used in each consecutive block of 20 matches on a rolling basis?

Ric's comment: I think no matter what length of time you use, it's going to be arbitrary - one can argue that taking 20-match blocks is just as arbitrary, with the end of one block possibly falling between two matches in the same series.

Hi Ric. It is generally understood that instability in a team will lead to it losing more often. It would be interesting to add another column to your list for the percentage of games won each year. Comparing this against other nations would also give an idea of the depth and strength of the side. Cheers, Ben

Ric's comment: Yes, Ben, adding that results data in gives another whole dimension to this analysis. I'll maybe have a look at doing that soon - thanks for the suggestions.

Following Pelham Barton's comment, the fact is that not all changes to a team contribute equally to instability. If you had the same 11 every game but a new captain each match, that would destabilise a team much more than swapping a few fringe players in and out. Also, having a different opening combination is more destabilising than having a different no. 6 or a different fourth bowler. How you accommodate all these variables is difficult! I'd suggest incorporating a couple of extra measures into your equations, e.g.: the number of players who played in, say, 80% of the games (to see how big the team's core was versus its periphery - a larger core indicates a more stable team); the number of captains/vice-captains used; the number of different new-ball combinations, and of nos. 1, 2 & 3 combinations with the bat; the number of different wicketkeepers.

Ric's comment: You could do all of these things, but it makes the measure much more complex. My aim was to create a simple measure quantifying team changes over the course of a year. I do like your idea of looking at the core component of a team, and I am sure there would be a way of using that as a basis for measuring these team changes.

Good post, but you must also take other considerations into account - e.g. the number of matches per year has been increasing, and with it the probability of players getting injured, so more replacement players are called up (like what we saw on Australia's tour of India).

Ric's comment: I think the stability index takes that into account, in that the number of extra players is divided by the number of additional matches.

I like the way you account for the different number of matches played in a year - your measure is simple, easy to understand, and gives sensible answers.

Where I would suggest a slight modification is that there is more to team stability than the simple number of players used. A team with 10 fixed players which makes just one change halfway through the year is more stable than a team which keeps choosing a different XI from the same squad of 12 players.

However, simply counting team changes instead of new players will not do either. A team with 10 fixed players which keeps swapping the same two players over is more stable than one with 10 fixed players and a different 11th player each time. Your measure accounts for this.

A measure only slightly more complicated than yours would count 1 each time a completely new player (for that year) joined the team and 0.5 if a player returned to the team. Then divide (as with your measure) by one less than the total number of matches.
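Pelham's weighted variant could be sketched as follows, assuming each match lineup is available as a set of player names (the function and the player names below are hypothetical, purely for illustration):

```python
def weighted_stability_index(lineups):
    """Pelham's suggested variant: a brand-new player (for the year)
    counts 1, a returning player counts 0.5; the total is divided by
    one less than the number of matches.
    `lineups` is a list of match XIs, each a set of player names."""
    seen = set(lineups[0])       # everyone who has appeared so far this year
    previous = set(lineups[0])   # the XI from the preceding match
    changes = 0.0
    for xi in lineups[1:]:
        for player in xi - previous:
            changes += 1.0 if player not in seen else 0.5
        seen |= xi
        previous = set(xi)
    return changes / (len(lineups) - 1)

# Hypothetical example: ten fixed players plus a rotating 11th spot.
base = {f"P{i}" for i in range(10)}
# Match 1: base + X; match 2: base + Y (new); match 3: base + X (returning)
lineups = [base | {"X"}, base | {"Y"}, base | {"X"}]
print(weighted_stability_index(lineups))  # (1 new + 0.5 returning) / 2 = 0.75
```

Note how the returning player X only adds half a point, so shuffling the same squad scores lower than a stream of genuinely new faces, which is the behaviour the comment asks for.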

Hi Ric,

First of all, thanks for the post - good to see a meaningful analysis. This was something I wanted to do myself. As you rightly mention, the Aussies have been quite stable over the last decade; in the last two years they have been trying out players to figure out their new combination for the future. I'm sure that from 2010 to 2020 the team will regain its stability.

As an addition to your post, I would mention that in Tests, from 1990 to date, England have had 103 debutants (the highest among the top eight nations), followed by WI (87) and Pak (81). Clearly this shows the kind of instability all three teams are in. The most stable side is again the Aussies (62), closely followed by SL (66) and SAF (68); in the mid-range are India (72) and NZ (74). It is interesting that SL have had quite a stable side in the last two decades.
