Re: New Black List Thread - Franklin Leonard answers your questions
Have to admit, there is validity to this statement. When the only criterion for scoring a script is a "gut feeling" about whether one would recommend it to a peer or superior in the industry, that inherently creates randomness.
Yes, there is subjectivity in any scoring system (as is frequently argued in these forums); I won't dispute that. But some systems invite more subjectivity than others. A lot more. You can't directly compare the BL to contests, because contests' overall assessments are usually tied directly to specific component scores (whereas, as BL policy notes, its overall score is not a direct reflection of the component scores).
That's why I find the statement that "fewer than 4% of scripts with more than one paid rating (that's one in 25) had a standard error of mean >= 1.5 (e.g. 5,8 or 4,7, etc.)" a bit questionable. You'd think a system that relies so heavily on gut feelings would exhibit more disparity in scores. But -- even if those numbers are spot-on accurate -- it doesn't change the fact that, by definition, a lack of defined methodology makes a scoring system random.
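For what it's worth, that threshold is easy to sanity-check. Here's a quick sketch (mine, not anything the BL publishes; I'm assuming "standard error of mean" means the usual sample standard deviation divided by the square root of the number of ratings):

```python
from math import sqrt
from statistics import stdev

def sem(ratings):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return stdev(ratings) / sqrt(len(ratings))

# With exactly two ratings, the SEM works out to half the gap between them,
# so the cited 5,8 and 4,7 pairs both sit exactly at the 1.5 threshold.
print(round(sem([5, 8]), 6))  # 1.5
print(round(sem([4, 7]), 6))  # 1.5
print(round(sem([6, 7]), 6))  # 0.5 -- a one-point disagreement
```

In other words, for a script with exactly two ratings, "SEM >= 1.5" just means the two scores are three or more points apart. The <4% figure only tells us that big three-point splits are rare; two readers can still disagree by two full points without ever tripping that threshold.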
The thing is, when you're paying for a service, you hope for a little bit more structure around it than that.
I can understand why there would be limited disparity among scripts scoring the highest (8s, 9s, 10s). But for everything scoring a 7 or below, it very much seems like a crapshoot where you land on that scale.
Despite my belief in the power of and necessity for the BL, I continue to think this is one of its biggest, most obvious flaws. I understand the desire to give readers flexibility -- but I kinda wish the BL would acknowledge that the way it currently works makes many users feel like they could easily be throwing away their money (and is a source of much of the frustration often thrown around here).