AI: Risks and opportunities
How are GPs harnessing the power of AI, and why is it still wise to tread with caution?
Artificial intelligence and machine learning technologies have been around for some time, but the step changes over the past few years – in particular, the launch of generative AI – are creating opportunities and risks across the board. Here, a group of seasoned GPs explain how they are harnessing the power of AI, but also why investors should tread with caution.
Jay Eum, Managing Partner, GFT Ventures, on AI’s role in autonomous trucking:
“There is a lot of talk about robo-taxis and making consumer vehicles autonomous, but that is incredibly difficult because you have these vehicles driving on local roads, where there are people and pets that may do unpredictable things and you may well have passengers.
“Long-haul trucking, by contrast, is far more predictable because you are largely moving along highways and there are fewer variables – it’s an easier puzzle to solve.
“We’re focusing on Korea because it’s a relatively small landmass and you can commercialise the application far sooner than you could in, say, China or the US.
“There are only three major highways in Korea and so you can develop the training models here much more easily using cameras. That way, you can develop the capability in a small, domestic market, then take it to larger markets internationally.”
Akio Tanaka, Partner, Headline, on AI’s use in deal origination:
“We are an AI-enabled venture capital firm. While VCs tend to invest expecting to create innovation, most firms themselves are largely analogue – many will only invest in companies they can drive to, but the world is much bigger than that.
“We have developed our own software internally that allows us to track 7 million companies using our proprietary database and publicly available information. We track nearly 200 data sources, many of which are updated daily. We can track traffic, development information, human resources and online marketing spending, among other data points, across these companies. The technology is powerful enough to predict, say, which fintech businesses in Singapore will reach Series A rounds in the next 12 months, with 80% accuracy. It means that we can source deals in an unbiased way. And while we can find opportunities through our human networks, we can also find them entirely outside them.”
Mark Boggett, CEO, Seraphim Space Fund, on AI in space:
“AI applied to space and satellite imagery is incredibly powerful and there is a lot of activity here – we see between 50 and 100 deals a month. One of our portfolio companies, ICEYE, uses a satellite constellation to observe the Earth in 50cm resolution day and night, and even through cloud cover. It collects images from every square metre of the Earth’s surface every three hours and uses AI to detect change and then infer what might happen next. This clearly has applications across almost every vertical, although currently the main customers are using it for defence and for climate and sustainability.
“Insurers, for example, are significantly challenged by extreme weather events. In a flood, this technology can identify, in real time, the flood perimeter, its depth and speed, and which assets are affected. This helps them cut costs and time because they don’t need to send people into the field. This data is also a source of truth and so can be used for the humanitarian clean-up after disasters. We are also seeing sovereigns look closely at this as they recognise their own defence technology is woefully out of date. There’s now a sharp focus on getting access to the data and finding ways to use it in their systems.”
Lonne Jaffe, Managing Director, Insight Partners, on the genesis of generative AI and why the disruption will be different this time:
“One reason we are being cautious around AI investments is that the risk and disruption from recent developments, and the balance of power between start-ups and incumbents, have not been playing out in the same way as in past waves of disruption, such as the shift to the public cloud or to mobile devices.
“It’s worth taking a look back at how we’ve got to where we are today. We’ve seen major progress in machine learning and AI over the past 20 years. The deployment of neural networks in the early 2010s saw the technology used largely for prediction, recommendation and classification systems that could, for example, recommend e-commerce products, identify cancerous polyps from a colonoscopy video, or take photos of damaged cars and then work out how much a repair might cost.
“There were two significant scientific breakthroughs in 2017. One was at OpenAI, which at the time was running experiments to predict the next letter in a body of text by feeding a neural network the text of product reviews. One of the neurons in this neural network learned how to identify whether a sentence in a product review was positively or negatively toned – the developers didn’t ask it to do this; it just happened, like an emergent property.
“Initially, attempts to develop this further failed because the architecture OpenAI was using wasn’t designed for enough scale. But just a few months later, Google published a paper laying out a new architecture – the Transformer – that was more scalable.
“OpenAI rebuilt its neural networks using the Transformer architecture, scaled it up and we got GPT-2, GPT-3, and then ChatGPT and GPT-4 – which can do everything from suggesting software code to drafting blog posts and writing poetry. We’ve seen similar innovations with image and video generation. While these systems are still predicting – predicting the next token in text – they are becoming qualitatively different. They are more like systems of creation.
“When new technologies emerge that drive down prices and costs, economists talk about two kinds of impact on other goods, tasks and businesses. First are economic substitutes to the activity that is going down in price, where the substitute becomes less valuable because of the price decline. This is what people mean when they talk about disruption to a company’s business model, or robots taking jobs. Other activities are economic complements – where a price decline in one good makes something else more valuable. An example of this might be an electric vehicle battery: if engineering developments make batteries cheaper and able to cover longer distances for a given price, the electric vehicle (a complement to the battery) becomes more valuable.
“While some activities and firms are worried about being disrupted as AI becomes an economic substitute, the second- and third-order effects can often depend on the industry power structure and the elasticity of demand. Sometimes elasticity of demand is low. Let’s imagine that it used to cost $15,000 to design a company logo, and with a future AI product release, you can now make dozens of high-quality corporate logos for just a few cents. Will companies all rush out to change their corporate logos since the price is so much lower? No, because the elasticity of demand in the corporate logo redesign market is low. By contrast, in many software subsectors, we’re used to prices dropping 90% and the market size going up by 10x. This is because software markets often have high elasticity of demand.
“In many markets, the customer is paying for things like quality, trust, craftsmanship, brand and the judgement of senior talent, and so existing firms may be able to capture some of the input cost declines as higher margins. Also, when prices go down, even if the market size for a given product sector decreases overall, the money doesn’t evaporate. The customer can spend the money elsewhere. The money flows to the next most valuable use. That’s why it’s important to look at the second- and third-order effects to understand who will capture the value – and the amount of money flow at stake here is extremely large.
“One obvious shift we’ve noticed between the 2010s-era systems of prediction AI and the newer systems of creation generative AI is how much easier it is for companies to implement useful generative AI capabilities in production. The years of work and cost of employing teams of expensive machine learning engineers just don’t seem to be as important with generative AI as they were for prediction systems – at least so far.
“Existing players can leverage their distribution, product capabilities, data and customer trust to launch powerful large language model features to compete with generative AI start-ups before they can get to scale. We’re tracking hundreds of pure-play generative AI start-ups, and we’ve already seen some get started, funded and disrupted within this calendar year.
“The previous generation of AI provided a big advantage to start-ups. They were nimble and, by the time the large technology incumbents got moving, start-ups like Salesforce had already become large enough to enjoy platform, network and data scale effects. This time around, because it is easier to incorporate generative AI technology, the incumbents are often able to shift their existing products to take advantage of generative AI quickly and leverage their distribution to reach millions of people worldwide, before the start-up competitor can get off the ground. This shift in the balance of power has massive implications from an investment perspective.
“It’s only been a few months, however. There are many reliability and safety issues people are legitimately concerned about, and other limitations with these systems today. And there are many ‘known unknowns’, such as whether larger, more closed models or smaller, more open ones will win out.”