I think MPAA Chairman Chris Dodd is right to say, in the wake of the controversy over the initial R rating given to the documentary Bully (it was lowered to PG-13 after cuts), that the association’s ratings system, which carries great power, should be more transparent to the public. There’s a perception, fair or not, that the ratings weight certain content—like sexual content between gay couples—more heavily in moving towards an R rating, and that the system fails to acknowledge how context mitigates content. That last perception was at issue in Bully: the R rating depended on instances of profanity deemed inappropriate for teenagers, despite the fact that those profanities were uttered by teenagers and directed at teenagers. More data about how the ratings panels make their decisions would help outside observers determine whether these perceptions of inconsistency and failure to contextualize are accurate, or would help debunk them.
Discussing whether transparency might be a good idea is not the same thing as committing to it, of course. Releasing the exact counts of words that trigger ratings might be one place to start. And while making it clear who sits on the ratings panels might open up the possibility of bribery, it would also let outsiders look for patterns in raters’ behavior, the same way political analysts score the leanings of judges. Any other thoughts on what data it would help to have in the open? It’d be nice if this were the kind of conversation that doesn’t just surface briefly and disappear.