Fascinating piece in The Observer this weekend, written (beautifully) by a journalist who works for one of the big AI companies, providing ‘gold standard’ examples of writing to help train their Large Language Model and improve their chatbot’s answers. (Thanks to excellent writer-buddy Paul for passing it on.)
My gut reaction to this – as anticipated by the writer – is that it seems tantamount to the best turkeys in the flock helping to sharpen the Christmas carving knives.
And yet, like almost everything with AI, things are not quite as simple as that. And, as I’ve explored in GOD-LIKE, our relationship with technology has always been hallmarked by contradiction and conundrum. All of our tools contain within them both the promise of our salvation and the seeds of our destruction.
The economics of this new boundary point are becoming clearer. Firstly, we are hitting a ‘data frontier,’ as the FT recently reported:
AI companies have made significant strides forward in the past 18 months, but have begun to run up against what experts describe as a data frontier, forcing them to trawl ever-deeper recesses of the web, strike deals to access private data sets or rely on synthetic data.
“There’s no more free lunch. You can’t scrape a web-scale data set any more. You have to go and purchase it or produce it. That’s the frontier we’re at now,” said Alex Ratner, co-founder of Snorkel AI, which builds and labels data sets for companies.
FT.com: https://www.ft.com/content/e6a4dcae-2bda-42de-8112-768844673cea
The free lunch was vast, and grotesque. Extraordinary amounts of data were used to train these initial models, and no one got paid. That will come to be seen as a historic injustice – one that people are very quickly not going to allow to be repeated – and ‘data reparations’ are likely to be argued in courts very soon.
But what that means is that, if we are to see the benefits of high-quality LLMs, we are going to need high-quality human graft to create them – and that will need paying for. That’s what’s paying for the lunch of the writer of The Observer piece.
Beyond that, though (and this is what I spend much of my day job exploring), there is widening acceptance that AI will not be a silver bullet for productivity, and that lazily deploying it to replace workers will not be smart business.
Again in the FT, Google’s James Manyika noted in an interview:
“You don’t win by cutting costs. You win by creating more valuable outputs. So I would hope that those law firms think about, ‘OK, now we have this new productive capacity, what additional value-added activities do we need to be doing to capitalise on what is now possible?’ Those are going to be the winning firms.”
James Manyika, in the FT: https://www.ft.com/content/2c122092-51ab-4529-b733-ac466f338cb5
So there is a potential ‘narrow path to good AI’, of the kind economists like Daron Acemoglu set out: we do valuable, high-skill and well-paid work to hone these LLMs into tools that are reliable and effective, and then deploy them not to replace workers, but to augment labour and make our work more valuable and productive.
How will we get there? Well, as I set out in GOD-LIKE, it will take action at the system level, with global governance aligned and fiscal policies incentivising investment in ‘good work’, not just more machines; at the firm level, with decision-makers making better decisions than they have in the past, when the longer-term consequences of pursuing short-term profits were not well considered; and at the individual level too, with each of us committing to our flourishing and learning to use these systems with greater skill.
None of this is easy. But if we’re to do more than sharpen the carving knives, it’s urgent work – Christmas is coming pretty fast.
Buy a copy of the book here.