This IMDC customer solution brief provides an overview of the outlook, drivers, opportunities, and infrastructure challenges created by the boom in generative AI.
Artificial Intelligence (AI) has made extraordinary advances over the last year, driven by the phenomenal uptake of and interest in generative AI apps such as ChatGPT, DALL-E, GitHub Copilot and Stable Diffusion. These apps can create impressive images, answer complex questions, write essays, build websites, and even write computer code, all in response to a short, non-technical text prompt.
Traditionally, AI systems have been trained on large amounts of data to identify patterns. Generative AI goes a step further, using complex systems and models to generate new outputs in the form of images, text, or audio from natural language prompts. This new model promises massive social and economic value due to its ability to handle many repetitive tasks quickly and accurately, combined with its ability to break down communication barriers between humans and machines.
There is inevitably widespread debate over the total economic value of generative AI and the timeframes for its growth, but the numbers are huge. Goldman Sachs estimates a value of almost $7 trillion (7% of global GDP) within a decade, while McKinsey estimates that it could add the equivalent of $2.6 trillion to $4.4 trillion annually (3-4% of global GDP). This would increase the total current forecast economic impact of broader, non-generative AI applications by between 15 and 40%.
As well as accelerating innovation and changing the way people work, the new technology will have a huge impact on IT infrastructure. Generative AI models run on graphics processing unit (GPU) chips, which require 10-15 times the energy of a traditional CPU because they pack far more transistors into their arithmetic logic units. As a result, generative AI demands more compute, network, and storage infrastructure than any previous technology.
Many generative AI models have billions of parameters and require fast and efficient data pipelines in their training phase, which can take months to complete. These training phases then need to be repeated as new data is added.
After the training phase, the second phase of generative AI delivery is inference: using the application to respond to queries and return results. If the model is for general use, this requires more geographically dispersed infrastructure that can scale quickly and provide lower-latency connectivity to applications, particularly for real-time or immersive workloads. This means more data centers in more locations, a model that differs from the centralised public cloud model that currently supports most applications. Power consumption during inference is still very high: the energy used by a single Google search could power a 100 W lightbulb for 11 seconds, yet a single ChatGPT session consumes 50-100 times more power than this.
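The lightbulb comparison above can be turned into rough per-query energy figures. The short calculation below is a back-of-envelope sketch based only on the figures quoted in this brief (100 W for 11 seconds per search, and a 50-100x multiplier per session); actual per-query energy varies by model, hardware, and data center efficiency.

```python
# Back-of-envelope energy estimate from the figures quoted above.
# Assumption: one Google search ~ the energy to run a 100 W bulb for 11 s,
# and one ChatGPT session uses 50-100x that energy.

BULB_WATTS = 100
BULB_SECONDS = 11

search_joules = BULB_WATTS * BULB_SECONDS   # 1,100 joules per search
search_wh = search_joules / 3600            # joules -> watt-hours, ~0.31 Wh

session_wh_low = search_wh * 50             # lower bound per ChatGPT session
session_wh_high = search_wh * 100           # upper bound per ChatGPT session

print(f"Google search:   {search_wh:.2f} Wh")
print(f"ChatGPT session: {session_wh_low:.0f}-{session_wh_high:.0f} Wh")
```

Even at the low end, serving millions of such sessions per day adds up quickly, which is why inference capacity, not just training, drives the data center build-out described here.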