In a bizarre case yesterday, Pajiba published a long investigative piece on a new book that gamed the New York Times Bestseller list. The book's sales came from a bulk-buying campaign by the publisher or the author (later confirmed by book vendors), and it even shipped with a plagiarized cover.
The story started when YA author Phil Stamper began questioning how a publisher nobody had ever heard of managed to land a book on the NYT bestseller list, and started digging (and sharing) on Twitter. All evidence indicated that the book was written to secure the author an acting gig in the eventual movie adaptation. Nine hours ago, Vulture reported that the book had been pulled from the NYT list.
A week ago, author (and physicist, and poet) Samuel Peralta posted this:
Peralta’s new anthology sold some 30,000 copies within a week and made it to the USA Today lists; had the NYT list been based purely on data, it should have shown up there. It didn’t. For the record, Brandon Sanderson, as he recounted in his BYU class, hit the top of the list with 10,000 sales.
We all know indies who will never end up on the NYT. Andy Weir was an outlier: consider the brilliant “We Are Legion” by Dennis Taylor, a science fiction series that has garnered thousands of loving reviews on Goodreads and Amazon. Or any of Bella Forrest’s books – she’s an author who sells as much as Stephen King over Amazon. Despite the data, none of these authors are going to show up there.
We also know the lists can be gamed.
The New York Times list is a holdover from the days when the influential New York literary critic was a key player in discovering literature. In the days of Google, Amazon search and book discovery services, it’s practically a relic.
And it is, quite clearly, no longer an indication of success: Amazon’s bestseller lists are a far better indicator. The NYT’s lists are opinionated lists, not quantitative.
So why is it important?
The problem with sales-data lists is that plenty of crap sells. Consider, for example, Fifty Shades of Grey. Base a list purely on sales data and you’re more likely to see:
a) the “penny thrillers” of our time – cheap entertainment that strikes gold for some reason
b) the James Pattersons and Kings, who are powerful brands in their own right
c) and the books with huge amounts of marketing muscle behind them
There are plenty of such data-based lists, and there is always a certain mix of literary gems alongside whatever is flying off the shelves, regardless of how good it is.
Literary critics are generally arbiters of quality, which is a rather vague and ephemeral thing that we cannot really define (see Zen and the Art of Motorcycle Maintenance and Phaedrus’ arguments on quality). For me, Hugh Howey’s Wool is quality. FSOG is not. It’s a very subjective and very human thing that will never be captured in the data. We don’t need literary critics’ lists – but I personally want LOTS of them – to capture a vast sample space and give us good books filtered through many human opinions.
I believe the NYT should stop pretending its list is data-based. It isn’t. It’s arbitrary and subjective. Instead, the NYT should embrace the arbitrary and subjective! Give us what the NYT’s staff thinks is a great literary masterpiece and leave the quantitative rankings to USA Today and Amazon. Toss Nielsen BookScan or whatever other data sources they look at. They’re doing it anyway – might as well call a spade a spade and get on with it.