How A Ratings Service Can Deal With Subjectivity


  • #16
    Re: How A Ratings Service Can Deal With Subjectivity

    Originally posted by MikeD View Post
    Goldman: I only had a math minor in college, so I think a statistician would have to give you an answer on the 2, 5, and 8. That is an extremely broad range. Here's a simple example of handling ratings in various ways. Let's say the ratings are 9, 8, and 4. The 4 can definitely be considered a skewed rating, but how do you handle it? Here are some options:

    1. Eliminate the 4 entirely and compute an average = 8.5
    2. Average all three results = 7
    3. Compute a weighted average:

    40%x9 + 40%x8 + 20%x4 = 7.6

    Interestingly, options 1 and 3 give a result that is probably closer to the truth than a simple average. All three are statistical treatments. I don't mean this analysis to be a final suggestion, but an example of how one statistical approach can work with smaller numbers of ratings.
    Mike, I took a bunch of statistics courses in university and what you're suggesting would only be valid if you were dealing with large numbers of reviews.

    Most methods of increasing the accuracy of a statistical analysis involve different ways of interpreting a bell-curve distribution.

    BUT YOU CAN'T GET A BELL CURVE FROM 3 VALUES.

    If each script had dozens of ratings, then sure, you could do something about that. But that's not the case for most scripts uploaded on BL3. So your suggestions are not a valid approach in this case.
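
    Just so we're all looking at the same arithmetic, here's a rough sketch of what Mike's three treatments compute on his 9/8/4 example (the function names and the "drop the rating furthest from the mean" rule for option 1 are mine, purely for illustration):

        # Sketch of the three treatments applied to the example ratings 9, 8 and 4.

        def drop_outlier_average(ratings):
            # Option 1: drop the single rating furthest from the mean, average the rest.
            mean = sum(ratings) / len(ratings)
            outlier = max(ratings, key=lambda r: abs(r - mean))
            kept = list(ratings)
            kept.remove(outlier)
            return sum(kept) / len(kept)

        def simple_average(ratings):
            # Option 2: plain average of all ratings.
            return sum(ratings) / len(ratings)

        def weighted_average(ratings, weights):
            # Option 3: weights are assumed to already sum to 1.
            return sum(r * w for r, w in zip(ratings, weights))

        ratings = [9, 8, 4]
        print(round(drop_outlier_average(ratings), 2))               # 8.5
        print(round(simple_average(ratings), 2))                     # 7.0
        print(round(weighted_average(ratings, [0.4, 0.4, 0.2]), 2))  # 7.6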


    • #17
      Re: How A Ratings Service Can Deal With Subjectivity

      Originally posted by MikeD View Post
      I'm suggesting a.....
      And I'm suggesting you get a good therapist... Or a clue... Whichever is most effective.

      Midnite


      • #18
        Re: How A Ratings Service Can Deal With Subjectivity

        Originally posted by goldmund View Post
        OP -- That's true if you have lots of data. Then, you can be fussy and cut off the aberrant numbers. Then you can kill subjectivity and get a useful average.

        What is the statistical truth of, like, 3 ratings?

        One guy gave you a 2, one gave you a 5, one gave you an 8. What do you do?
        Here's my math: 2 plus 5 plus 8 equals PASS


        • #19
          Re: How A Ratings Service Can Deal With Subjectivity

          Originally posted by goldmund View Post
          What do you do?
          Shoot the hostage.


          • #20
            Re: How A Ratings Service Can Deal With Subjectivity

            By the way, if a script got a 9 or a 10, it may STILL be a pass for a lot of people.


            • #21
              Re: How A Ratings Service Can Deal With Subjectivity

              Columbia passed on E.T.

              http://entertainment.time.com/2012/0...passes-on-e-t/

              Everyone passed on Back to the Future

              I'm sure others can come up with countless similar examples of studios passing on scripts/projects that are now universally acknowledged to be "great"

              Nobody knows anything


              • #22
                Re: How A Ratings Service Can Deal With Subjectivity

                Originally posted by FranklinLeonard View Post
                I watched this movie at the Angelika theater during the summer of 1998 when I was working for the Legal Aid Society of Manhattan and living in the neighborhood where it was shot (Grand & East Broadway section of the Lower East Side). Needless to say, it made an impact.
                Is that why you spend so much time on your Black List algorithms?

                Very cool. One of my favorite movies: the possibility that anything can be computed and thus predicted, set against the impossibility (or insanity) of what it would take to do so.


                • #23
                  Re: How A Ratings Service Can Deal With Subjectivity

                  MikeD --


                  I like what you've got going here. But I think we need to go next level...


                  Can you come up with a computational algorithm that would help us avoid Investor Bias? Or better yet -- audience and critic bias?


                  • #24
                    Re: How A Ratings Service Can Deal With Subjectivity

                    I've been busy and couldn't keep up with this thread, but to respond to a few comments:

                    What I did does not use the bell-curve approach, which requires larger numbers to isolate skewed results. When many scores are available, they can be arranged to show the typical range of ratings and then averaged within that range. Great statistical concept.

                    Now, a good question: what if the 8 and the 9 came from poor sources and the 4 from a good source? Then the 4 would be given more weight in the average. You can weight by source, as follows:

                    20%x8 + 20%x9 + 40%x4 = 5

                    TO MB: This approach is for rating scripts from new voices/new writers; it's not intended for the pro Hollywood industry, for managers, or for agents.
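
                    In code, weighting by source might look something like this quick sketch (the helper name and the reliability numbers are mine, just for illustration; the weights are normalized so they always sum to 1):

                        def source_weighted_average(ratings, reliabilities):
                            # Each rating is weighted by its source's relative reliability;
                            # the reliabilities are normalized inside the function.
                            total = sum(reliabilities)
                            return sum(r * w for r, w in zip(ratings, reliabilities)) / total

                        # The weaker sources gave the 9 and the 8; the stronger source's 4
                        # counts three times as much as each of theirs:
                        print(source_weighted_average([9, 8, 4], [1, 1, 3]))  # 5.8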


                    • #25
                      Re: How A Ratings Service Can Deal With Subjectivity

                      There will always be:
                      • scripts better than yours.
                      • scripts worse than yours.
                      • people who love your script.
                      • people who don't care for your script.
                      Nobody knows anything... you're god damned right.
                      All the best,
                      Lee
                      __________________________________
                      I'm not just a screenwriter...
                      I also write and illustrate picture books!


                      • #26
                        Re: How A Ratings Service Can Deal With Subjectivity

                        I made a slight mistake in the posting above.

                        The calculation, when the reader who gave the 4 is the highly rated one, should be:

                        20%x9 + 20%x8 + 60%x4 = 5.8

                        That's a lot of weight to give a high-quality reader: 60%.

                        For 50% weight, the calculation would be:

                        25%x9 + 25%x8 + 50%x4 = 6.25

                        When all readers are of equal quality and differ only in their subjectivity, then the earlier calculation is the right one:

                        40%x9 + 40%x8 + 20%x4 = 7.6
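
                        A small sanity check of those three weightings in Python (nothing fancy, just the same arithmetic written out):

                            def weighted_average(ratings, weights):
                                # The weights in each case below already sum to 1.
                                return sum(r * w for r, w in zip(ratings, weights))

                            ratings = [9, 8, 4]
                            print(round(weighted_average(ratings, [0.2, 0.2, 0.6]), 2))    # 5.8  (60% weight on the strong reader's 4)
                            print(round(weighted_average(ratings, [0.25, 0.25, 0.5]), 2))  # 6.25 (50% weight)
                            print(round(weighted_average(ratings, [0.4, 0.4, 0.2]), 2))    # 7.6  (all readers equal)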


                        • #27
                          Re: How A Ratings Service Can Deal With Subjectivity

                          OK, you're winning me over. My only question now is how we rate the readers to determine how to weight their scores.

                          Might I suggest that each reader submit multiple coverages to more experienced readers, and we then determine a score for each reader so we can compare them to each other when they read a script?

                          The only flaw is that the readers reading the readers will need to be ranked and weighted. I suggest we just continue this process to infinity, until every reader and every script has an accurate number that reveals their objective worth.


                          • #28
                            Re: How A Ratings Service Can Deal With Subjectivity

                            I actually took the liberty of contacting Stephen Hawking about this, and he replied explaining he'd developed a whole theory, detailed below.

                            Better scripts = Higher scores, dumbass
                            Script Revolution - A free to use script hosting website that offers screenwriters a platform to promote their scripts and a way for filmmakers to search through them.


                            • #29
                              Re: How A Ratings Service Can Deal With Subjectivity

                              Except that when readers diverge in their judgement as to which script is "better" than another one, there's no way to adjudicate who's right and who's wrong since it's stipulated to be a matter of taste. So neither reader can be right or wrong, and therefore neither script is better or worse. It follows that the ratings are by definition arbitrary because they cannot be adjudicated when disputes arise. And the fact that the ratings do in fact tend to converge pretty closely is meaningless without some method for predicting whether they will in any particular case. It could depend on a reader's rich sense of story structure and dialogue built up over years spent reading thousands of scripts, but it could just as easily depend on whether the reader was fed recently, or burned his/her tongue on coffee five minutes ago, or is just biased towards females.

                              Either there are facts about whether scripts are good or bad/better or worse, or there are no such facts. It cannot be both. The Black List wisely accepts that there are no such facts. It sells to writers what is ultimately an utterly arbitrary appraisal of their work which it deems more valuable than other such appraisals, due to the skill and experience of the people it employs to render verdicts and to the price it charges. That estimation of value too is a matter of taste, determined by market forces which just are the raw choices of individuals. At no point in the entire chain of appraisal, from initial reader judgement to consumer end purchase, is there any indication as to whether one movie or script is actually, really better than another. The very suggestion is completely meaningless. Let's not pretend otherwise, even in jest.


                              • #30
                                Re: How A Ratings Service Can Deal With Subjectivity

                                Originally posted by Hecky View Post
                                Except that when readers diverge in their judgement as to which script is "better" than another one, there's no way to adjudicate who's right and who's wrong since it's stipulated to be a matter of taste. [...] At no point in the entire chain of appraisal, from initial reader judgement to consumer end purchase, is there any indication as to whether one movie or script is actually, really better than another. The very suggestion is completely meaningless. Let's not pretend otherwise, even in jest.
                                God help me, I think I agree with this.
