Recently I was asked to be on the panel marking lists for comp at Cancon. Over the years I've been on (and organised) a fair few panels; however, I have recently shifted to preferring a Hard Cap system for my own tournaments (as regular readers will know).
However, I thought it would be interesting for readers if Greg Johnson, the Cancon TO, walked through his experiences and thought processes on the comp system he used. Greg agreed and kindly prepared this report for Fields of Blood:
Soft Comp at Cancon – how it went
There has been some discussion on this blog about the comp systems being used in tournaments these days, and in particular around the apparently uniquely Australian persistence with soft comp rather than hard caps. Cancon ran on the weekend, and was the largest Australian tournament in over a year, by a considerable margin (there were 76 players). Soft comp was used at this event, and it seems a good opportunity to look back at how the system performed.
Background
First of all, I should point out that Australia persisting with the soft comp system is perhaps a generalisation. A lot of players in Queensland and New South Wales have become quite disenchanted with the system due to some bad experiences in recent events, and there has been a swell of support for hard caps (or even no comp at all) from these areas. It is only really in Victoria where soft comp appears to reign supreme, and a lot of this can be attributed to the same core of players running most of the events in Melbourne. Players from the Hampton Games Club (HGC) ran something like two thirds of all the Warhammer Fantasy events in Victoria last year, so some similarities are always going to pop up.
For the geographically challenged among you (or anyone not familiar with Australia), Canberra (and by association, Cancon) is not in Victoria. The main reason it used soft comp again this year was that HGC ended up organising Warhammer Fantasy at the event, for want of anyone else volunteering. This meant our approach was employed there again, with the player pack being almost identical to the one used for Axemaster late last year. Anyway, that is a bit of background for you. On to the event itself…
The Cancon approach
Comp at Cancon was worth a total of 80 points out of 300 (160 battle, 80 comp, 40 sports and 20 painting). This translated to a maximum of 10 points for each of the 8 games. Comp scores were combined with battle points each round in order to determine rankings and opponents – a kind of hybrid comp-battle system, though not what would be considered a true comp-battle system. It meant that after round 1, a player with 20 battle points and 1 comp (ie a rock-hard list) could find himself facing a list with only 11 battle points but 10 comp (ie a joke of a list that somehow scratched out a marginal win). This is an extreme example, but it serves to illustrate the point.
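To make those mechanics concrete, here is a minimal sketch of how such a hybrid draw could be computed. It is illustrative only – the names, data structures and pairing logic are assumptions, not the actual Cancon software:

```python
# Illustrative sketch of a hybrid comp-battle draw: rankings each round
# are based on cumulative battle points plus the list's per-game comp
# score for every round played. Names and structures are hypothetical.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    comp: int        # fixed per-game comp score out of 10
    battle: int = 0  # cumulative battle points (max 20 per game)

def ranking_score(p: Player, rounds_played: int) -> int:
    # Comp is banked every round, so a soft list is steadily
    # propelled up the leader board alongside its battle results.
    return p.battle + p.comp * rounds_played

def draw(players: list[Player], rounds_played: int) -> list[tuple[Player, Player]]:
    # Swiss-style pairing: sort on the combined score and pair neighbours.
    ordered = sorted(players, key=lambda p: ranking_score(p, rounds_played),
                     reverse=True)
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered) - 1, 2)]

# The extreme example from the text: after round 1, 20 battle + 1 comp
# (21 total) lands on the same table as 11 battle + 10 comp (also 21).
hard = Player("Rock-hard list", comp=1, battle=20)
soft = Player("Marginal winner", comp=10, battle=11)
print(draw([hard, soft], rounds_played=1))
```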
The system above was used to prevent the situation where a player gets moderate results over the duration of the tournament, then gets a massive boost from his comp score at the end, leapfrogging all the tournament leaders and taking out 1st place (this occurred at Orktoberfest a couple of years ago, where an Ogre army played for draws each round and banked on its huge comp score to get it over the line). This scenario is extremely unfair, as the player can cruise through the middle field, avoiding all the “contenders” before coming from nowhere to take the prize. It leaves players with no idea where they stand until the end.
The system used at Cancon avoids the situation I have just described. By including the comp score each round, it is possible to see who the overall leader is (ignoring the less decisive elements of sports and painting), and a player going into the last round in the lead should be confident that he is either facing his main rival, or has already faced him in an earlier round. Put simply, everyone knows where they stand. It’s impossible to get a “bunny run” through the tournament and bank on comp to win the day – you might get a significant boost each round by using a soft army, however it’s just going to propel you up the leader board toward other players who are doing well. You will still have to fight your way to the top.
So, did it work? How do you assess such a thing? I suppose the ultimate goal of comp is to provide a level playing field for all players, regardless of the lists they have constructed. It’s an impossible goal, but there can be degrees of success.
The podium
The army that won the tournament was Ben Leopold’s highly unusual Skaven list. It contained:
Queek Headtaker leading 35 Stormvermin with the Razor Standard
Grey Seer with 4+ ward, Skalm, Dispel Scroll
Assassin with Potion of Strength, Tail Weapon
Assassin with Smoke Bombs, Tail Weapon
Assassin with Potion of Foolhardiness, Tail Weapon
BSB Chieftain with Armour of Destiny, Halberd
Chieftain with Dragonhelm, Halberd
40 Slaves
40 Slaves
35 Plague Monks with Plague Banner
8 Gutter Runners
The list received an 8 for comp, which in my opinion is perhaps 1 point higher than it deserved. It benefited from rounding up once the scores from the 3 comp judges were combined, and it probably also enjoyed a bit of shock value when they were assessing it – no Abomination, no real shooting, no Screaming Bell. Queek can be dangerous, but is only protected by a 3+ armour save – so he's hit and miss. Regardless of all this, the list is not as tough as a normal Skaven list. And Ben had to play the players who came 2nd and 3rd at the very least, due to his heading for the top tables very early on – and staying there. Even if he was arguably getting 1 more comp point than he should have, you can't deny that he fought well for the title.
In second place we had Garry Ingram. His Ogre list received a 6 for comp, and it contained the following:
Slaughtermaster with Fencer’s Blades, Trickster’s Helm, The Other Trickster’s Shard (very tricky, this guy). Level 4 using the Lore of Beasts
Bruiser BSB with Dragonhelm, Dawnstone, Heavy Armour, Ironfist, Lookout Gnoblar
Butcher with Dispel Scroll, Ironfist. Level 2 using Lore of the Great Maw
9 Ogres with Ironfists, musician and standard
9 Ogres with Ironfists, musician and standard
6 Ironguts with Standard of Discipline, musician and standard
3 Maneaters with Scout, Immune to Psychology, Extra Hand Weapons and musician
1 Sabretusk
1 Sabretusk
10 Gnoblars
4 Mournfang Cavalry with Heavy Armour, Ironfists, standard, musician and Dragonhide Banner
Ironblaster
The list is competitive; however, it's by no means the hardest Ogre army you will see (and we saw plenty of hard ones at Cancon). There is only 1 Ironblaster and nowhere near as much worthless chaff as some players employ with the list. The Slaughtermaster has chosen a moderate magic lore and is not employing a Greedy Fist (which GW inexplicably turned into a nightmare with their FAQs). There are no Poison/Sniper Maneaters. The army does contain the Mournfangs with the Dragonhide Banner (considered by many to be a mandatory selection), however the list is nowhere near what would generally be considered optimal. Garry played the other players in the top 3, so obviously he didn't get an easy run either. Well played, sir.
Rounding out the top 3, we have Dino Zanon. As usual, he had a pretty tough Daemon list:
Great Unclean One with Balesword, Stream of Bile. Level 2 with Lore of Nurgle
Herald of Tzeentch BSB with Standard of Chaos Glory, Spell Breaker, Master of Sorcery. Level 2 knowing all spells from the Lore of Shadow
Herald of Khorne on Juggernaut with Armour of Khorne
Herald of Khorne
30 Bloodletters with full command
20 Plaguebearers with full command, Standard of Seeping Decay
10 Pink Horrors with full command
7 Furies
7 Furies
3 Flamers with champion
1 Fiend of Slaanesh
1 Fiend of Slaanesh
Dino's list received a 4 for comp. I would have said this was at least a point too much – the list is not the hardest Daemon army I have ever seen, but it's still a very tough proposition for most opponents. I feel that the list benefited from a pattern I found at the lower end of the scoring spectrum. The judges did not lean as hard as I would have expected on the toughest lists – possibly they were leaving space for even tougher armies in their calculations. Only 3 lists scored 2/10 for comp, and nobody scored a 1 (possibly a symptom of one of the judges electing to score the lists out of 5 rather than 10, which made it impossible for him to hand out 1s even if he wanted to, with rounding then taking its toll). As with the others, Dino did not enjoy an easy run of opponents – had he played poorly he would not have ended up on the podium.
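To illustrate how that effect could arise, here is one plausible way the three judges' marks might have been combined – a sketch only, assuming a straight average rounded up to a whole number, which fits the rounding-up mentioned earlier but is not a documented Cancon formula:

```python
import math

def combined_comp(ten_a: int, ten_b: int, five_c: int) -> int:
    """Average three judges' marks on a common 10-point scale, rounding up.

    The third judge marked out of 5, so his scores double on conversion
    and his effective floor on the 10-point scale is a 2.
    """
    scores = [ten_a, ten_b, five_c * 2]
    return math.ceil(sum(scores) / len(scores))

# Rounding up pushes a 7.33 average to an 8...
print(combined_comp(7, 7, 4))  # -> 8
# ...and even unanimous bottom marks can't produce a combined 1:
print(combined_comp(1, 1, 1))  # (1 + 1 + 2) / 3 = 1.33 -> 2
```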
Overall
Overall the average list score for the event was 5.11 out of 10, if you don't count the penalties that were applied for some late submissions. This would suggest that the event was not the complete cheesefest that some players clearly feared it would become, given there were no hard restrictions. The average score in the top 10 was 5.3, with scores of 4, 5, 6, 7 and 8 all represented (the 7 and 8 dragging the numbers above the overall average). Neither hard nor soft lists had a monopoly on the top tables. I think this would suggest that the system worked. However, there were a few key considerations that went into the process:
The right panel
One of the great difficulties regarding panel comp is finding the right people to score the lists. When comp scores are worth a lot, getting them wrong can torpedo a player’s tournament before it begins. Finding a group of players with sufficiently broad knowledge across all armies and the ability to assess the relative strengths of the specific builds is more difficult than some people might think – especially when a lot of players are already playing in the event. If you can’t find the right people for a comp panel, then it’s possible that you can’t use the approach at all.
The right weighting
If you’re going to use soft comp scores, they have to be worth something. There have been plenty of tournaments in the past where a soft score was allocated, however it was not worth enough to significantly affect the outcome of the tournament. The hard lists could bully their way through and still be pretty confident of coming out in front. I believe that making comp scores worth half as much as battle was enough to persuade some players that taking the hardest list possible might be counter-productive.
The right approach
The system used for Cancon was not a true “comp-battle” approach. A fully implemented comp-battle system compares the comp scores of direct opponents each round, and applies the difference as a bonus to the weaker army, and a penalty to the stronger one. This means that two equally balanced armies (be they hard, soft or in-between) will get no bonuses, whereas on another table a soft list getting pounded on by something horrible will get a big score by way of compensation. This approach effectively serves as a “strength of schedule” consideration, rather than an outright mark for the list.
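As a purely illustrative sketch of that mechanism (assuming the comp difference is simply added to the softer list's battle result and subtracted from the harder one's):

```python
def comp_battle_adjust(battle_a: int, comp_a: int,
                       battle_b: int, comp_b: int) -> tuple[int, int]:
    # A true comp-battle round: the difference between the opponents'
    # comp scores is credited to the softer list and debited from the
    # harder one. Equal comp scores mean no adjustment at all.
    diff = comp_a - comp_b  # positive if player A's list is softer
    return battle_a + diff, battle_b - diff

# Two equally comped armies: results stand as played.
print(comp_battle_adjust(15, 5, 5, 5))   # (15, 5)
# A soft list (9 comp) pounded by something horrible (2 comp) still
# banks 7 points by way of compensation for the uneven match.
print(comp_battle_adjust(4, 9, 16, 2))   # (11, 9)
```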
I can see the argument for using a true comp-battle approach, in that it compensates players for uneven matches. It may well be the fairest method of doing things. However, it does less to discourage players from taking very tough armies. A tough army that repeatedly tussles with other tough lists will suffer no penalty as a result, and a player may choose to bank on that. The Cancon approach actively encourages players to aim for more comp marks, because it's a guaranteed boost to your score each round. If you take a hard list, you are knowingly handicapping yourself in this regard. Instead you are relying upon performing strongly each round, and beating down any softer lists as their greater comp scores boost them into contention. I was happy with how the Cancon method worked out, however I am not completely decided on what “best practice” is as a whole. It may depend on how you want your event to pan out.
Regardless of which approach is taken, I firmly believe that having the comp applied as part of the draw was the right decision. It goes hand-in-hand with the scores needing to be worth enough to make a difference, but there is nothing to be gained by players' scores being applied in a lump at the end and having the leader board turned upside-down.
Conclusion
In the end, I was happy with how the comp system at Cancon worked out. Some players were unhappy with their comp scores, and this is almost always going to be the case. So long as you have a comp panel that you feel you can support, however, this can be accepted as a difference of (potentially biased) opinion. In truth I heard very few complaints, and I don't recall anyone raging at the end results. The event went well, and I would happily use this scoring system in the future. Whether players would change their lists next time round is something only they can answer.
I continue in my belief that soft comp can in fact enhance the overall tournament experience for a lot of players. It encourages diversity in list design, and it also shifts the emphasis away from the “hardest legal list” that hard caps can result in. This is dependent upon it being done right, of course. Done wrong, soft comp can turn people against the system and drive players away from an event. Perhaps that alone is an argument for hard caps, however at this point I believe that the gaming community has the resources to make it work.
You can read more of Greg's thoughts on Warhammer issues here.
I should mention that the threat of list resubmission existed at Cancon; however, it was only invoked for a single list – one that had a massive amount of shooting, no magic and no real combat. The panel asked for the list to be altered – not because it was overpowered, but because it was likely to lead to games that would be less than fun for both parties.