It’s 2026, and the generative AI brought to the world by ChatGPT (and now its multitude of competitors) has been around for three years. The technology has changed industries, the way companies work, the benchmarks for productivity and the global economy. It has slipped into daily operations and is rewiring how businesses run, supporting education and healthcare, and optimising business models and markets.
It is also very likely to be the key enabler of the world’s first solopreneur unicorn: a business worth more than $1 billion, run and managed by just one person.
Since those early steps, AI has proven itself a fascinating and constantly evolving technology with immense potential. Today, just over three years since ChatGPT rocked the world with its capabilities, AI is still at the leading edge of its own innovation.
Weak vs. Strong: AI defined
In 2025, when this article was first published, generative AI – the AI found in solutions such as ChatGPT – was finding its feet across multiple applications and purposes. It was a chatbot for an airline, a failsafe in a manufacturing plant, a financial services fraud-detection tool, and a quick way to summarise vast quantities of content into relevant bullet points. It was also what IBM defines as artificial narrow intelligence (ANI) or weak AI: intelligence narrowed down to doing certain things in a certain way, and the most common form of AI.
That basic taxonomy hasn’t changed. GenAI is still nowhere near the concept of intelligence that can think for itself. This strong AI, or artificial general intelligence (AGI), is the intelligence of a self-determining system capable of the same level of cognitive thinking as a human being. IBM defines it as ‘a theoretical form of AI where a machine would have an intelligence equal to humans’. It is self-aware and it can learn.
However, comparing what GenAI can do today with what it could do at launch is like comparing two different technologies. By 2024-25, GenAI had expanded into code generation, agents capable of operating software, multimodal systems, and domain tools in science, medicine, law and engineering. It remains limited compared to AGI, but that label underplays how far it has come since its first release. AGI remains theoretical, but rapid advances in capability and autonomy have intensified the debate around how close current frontier models actually are to that threshold (1).
AI: An intelligent trendsetter
AI is still dominating the headlines. Since ChatGPT took the global stage in 2022, the technology has continued to change and evolve. In 2025, some of the biggest AI models – Claude by Anthropic, Gemini by Google and QwQ-32B by Alibaba – underwent radical upgrades. Anthropic’s Claude expanded into 3.7 Sonnet and Opus 4, offering both hybrid and enhanced reasoning alongside coding and API abilities, while Google DeepMind’s Gemini 2.5 introduced more powerful reasoning, long-context and multimodal capabilities.
Looking ahead, there are some trends shaping the future of AI within the business and consumer spaces, and these are some of the most interesting:
Strong AI isn’t necessarily what people want
A year ago, most experts felt that AGI was unlikely to appear any time soon. Today, that estimate has changed: some AI lab leaders are forecasting AGI within the next three to 10 years, while most academics and experts expect it to appear only between 2040 and 2061. While there is disagreement around when it will come, most agree it is inevitable. It is expected to be less of a giant leap into intelligence and more of a gradual continuum, as AI systems incrementally exhibit more capabilities.
That said, the evolution of AI towards this level of intelligence isn’t something that humanity is taking lightly. There are deep divides between the public and the experts, with the former remaining wary and the latter more positive. These divides cut across roles, gender, country and field of study, and they show that AI needs to advance under stronger controls, with more oversight across regulation and ethics (2).
AI supercomputers
AI supercomputing platforms are expected to become more commonplace. Gartner is anticipating rapid growth in specialised AI compute stacks that combine orchestration software, high-bandwidth memory, and accelerators to run large models and simulations at scale (3). Research and Markets anticipates that this market will grow at a compound annual growth rate (CAGR) of 20.2% to $10 billion by 2029 (4).
What does this mean exactly? Well, they’re the engines of AI, run by AI, for AI. The more super the computer, the more capable it is of delivering reliable performance at scale for enterprise AI workloads. It helps companies operationalise AI and have the components needed to enhance deep learning, access large datasets, and optimise resource usage (among other things).
It’s also transformative. Think radically sped-up drug design and the discovery of life-saving drugs at a pace never before imagined. NASA and IBM use a high-end supercomputer to train Prithvi-weather-climate, a model that uses AI and machine learning for storm tracking, forecasting and historical analysis, among many other things (5).
Automation and security
Intelligent process automation is a huge positive for many industries, specifically manufacturing and mining. It can be used to automate processes within complex environments, reducing risk and improving worker safety and well-being. These tools also take over the boring and mundane tasks that sit in the background, freeing up human capital to work on more complex and relevant tasks. AI is adding intelligence to automation, which is streamlining workflows, reducing errors and delivering rapid insights. It’s also changing how workers and companies respond to issues and compliance requirements.
As for security, well, it has become the new frontier for AI and automation. Proactive and intelligent solutions are leveraging AI to combat attackers who are using the same tools to launch faster and more sophisticated attacks. Think deepfakes, adaptive malware and hyper-realistic phishing scams. Companies are rapidly adopting AI-driven security solutions to build defences capable of detecting and mitigating threats in real time, far faster than humans can achieve. However, these systems work in collaboration with humans: while AI identifies and tracks threats, skilled security experts remain essential for oversight, strategy and crisis management.
AI: The tools changing the world of work
It’s become very easy for companies to tap into the AI revolution. Companies such as Microsoft, Meta, Google, Anthropic, Amazon and Apple are all throwing plenty of investment at AI with the goal of becoming the leader in their respective spaces. However, new names and solutions have crept into daily use over the past year. These are the most exciting and powerful tools on the market today:
- Microsoft Copilot and Gemini for Workspace are among the leaders in suite-integrated copilots designed to step inside ecosystems and simplify them. They allow Google Workspace and Microsoft 365 users to do more with tools they’ve been using for years, like Word, Sheets, or PowerPoint.
- Reasoning-focused models, such as Gemini and OpenAI’s reasoning-class tools, are capable of logical thinking and can plan multi-step work or break down goals, acting as partners to their users. This has become known as agentic AI – workplace agents capable of moving from assisting users to doing things with and for them.
- GenAI assistants are being embedded in software tools and solutions, so in fields like customer support, these assistants help to resolve issues and reduce the admin burden placed on agents.
- Superagency tools are bringing the combination of reasoning, agents, models and automation into the workplace and giving workers the ability to chain different tools together to create their own solutions or automations.
Today, finding a tool that doesn’t use AI is nearly impossible, as almost every piece of technology has an AI interface within it. Now it comes down to how combinations of AI can help users do so much more with their lives and work.
AI: The blooper reel
AI is as susceptible to making mistakes as the people who code it. Over the past year, as companies have become more confident with their AI investments and strategies, there have been some real gems. Here are some of the best (and the worst):
There’s a running list of lawsuits being brought against AI developers such as Perplexity and OpenAI. They flag concerns around how the models were trained, plagiarism and copyright infringement.
DeepSeek, the Chinese AI chatbot dubbed the ‘ChatGPT killer’, was subject to a major cyberattack, which resulted in extensive downtime and reputational damage. The platform’s impressive rise to the top of the popularity rankings made it a target, and it hasn’t quite recovered since.
Hallucinations are on the increase. One of the most widely documented cases of AI inventing reality came when Google’s own chatbot produced a factual error that caused its parent company, Alphabet, to lose $100 billion in market value.
The future of this technology may not be set in stone, but its use cases and careful consideration of bias and ethics should be. It’s a superb technology, but it must remain within the right boundaries.
(1) https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
(4) https://uk.finance.yahoo.com/news/ai-supercomputer-market-outlook-report-143500035.html
(5) https://www.earthdata.nasa.gov/news/blog/nasa-ibm-research-apply-ai-weather-climate