Interesting stuff. A few methodological flaws, however.

First, this ignores team composition. Comparing Chicago A from ACF Fall (Cohn, Ferrari, Koo, Reece), NAQT SCT (Reese, Moriarty, Smith, Koo), and ACF Regionals (Yaphe, Maddipoti, Zimpleman, Scranton) is probably a bit shaky, and I am sure other schools had similar changes in lineup across tournaments. Heck, a few years back I tried to see how well SCT performance predicted ICT performance, but I couldn't do anything meaningful because most teams changed lineups between the two tournaments.

Second, opponents changed from tournament to tournament. You might want to instead note combined game score while calculating field-average combined game score, to give an approximate measure of field strength.

Third, if you want to look at lower-tier teams, the 25th percentile is probably more meaningful than the median score. You should also note each team's placement in its respective tournament, so that one can easily see which teams are the low-end teams of interest.

Finally, total points scored is not the only measure of difficulty. I hypothesize that bad teams expect to do poorly on bonuses, and that the percentage of tossups answered might be a better predictor of how bad teams feel about a tournament's difficulty (given that it is impossible to count how many tossup answers teams have merely heard of). My guess is that a field average of about 80% of tossups answered is the approximate comfort level, and that teams start to pout when individual games have fewer than 70% of tossups answered.

I'd be curious to see the results, but I don't feel like doing the grunt work. Anyone interested? --Anthony de Jesus
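For anyone tempted to do the grunt work, here is a rough sketch of the three measures above in Python. The stat lines and field names (`score_a`, `score_b`, tossups answered/read) are made-up illustrations, not any real tournament data or stats format.

```python
from statistics import mean, quantiles

# Hypothetical per-game stat lines: (team A score, team B score,
# tossups answered by either team, tossups read). Made-up numbers.
games = [
    (345, 120, 18, 20),
    (260, 200, 19, 20),
    (150,  90, 14, 20),
    (410, 230, 20, 20),
    (110,  75, 12, 20),
]

# Field-average combined game score: a crude measure of field strength.
combined = [a + b for a, b, _, _ in games]
field_avg_combined = mean(combined)

# 25th-percentile team score: more informative than the median
# when the low end of the field is what you care about.
team_scores = sorted(s for a, b, _, _ in games for s in (a, b))
q1 = quantiles(team_scores, n=4)[0]

# Field-wide tossup conversion: fraction of tossups read that anyone answered.
answered = sum(t for _, _, t, _ in games)
read = sum(r for _, _, _, r in games)
conversion = answered / read

print(f"field-average combined score: {field_avg_combined:.1f}")
print(f"25th-percentile team score:   {q1:.1f}")
print(f"tossup conversion:            {conversion:.0%}")
```

With the toy data above, a conversion rate in the low 80s would sit right at the hypothesized comfort level; in practice you would compute this per game as well, to flag the sub-70% games where teams start to pout.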
This archive was generated by hypermail 2.4.0: Sat 12 Feb 2022 12:30:48 AM EST