Re: Spec Scout
Good questions, Margie. I think I answered some of that in my last reply, but let me expand a bit.
Our coverage is one document showing all three readers' responses and scores. The only elements that are combined, for lack of a better word, are a) the synopsis (the first reader writes it, and then the three collaborate to refine it, if necessary), and b) the Spec Scout Score. Strictly speaking, that Score is not exactly an average; the formula is a trade secret, and that's as much as I'm allowed to say about it.
One of the nice things about our system is that each reader's scores have to be justified by their comments, and vice versa. So while we can't eliminate the grumpy reader factor (the GRF) entirely, the GR at least has to get granular and prove their points based on the script.
Another way we work to minimize the GRF is to have one of the founders review each set of coverage and compare it to the other coverages of the same script. This whole thing is very subjective, obviously, and we don't want groupthink, since reasonable people can and often do disagree about the merits of a given piece of material.
We don't see as many major discrepancies between the coverages as you might think. When we do, it's usually because one reader hasn't followed our internal rubric. In those cases, we work with the reader to adjust their comments and scores accordingly (still their own opinions, just applied through the lens of our system).