In today’s world, predicting the future is harder than ever. Any technology prediction that reaches more than two years ahead is highly uncertain, and relying heavily on it would be risky, if not foolish.
Let’s focus instead on the current year and have a look at what we can expect when it comes to AI.
It is safe to say that a lot will happen in the coming 12 months. One thing is certain: AI is here to stay, and it will continue to evolve in giant leaps.
In 2024, the initial cultural fascination with early generative AI will start to translate into expectations of tangible business results. This technology, with its ability to process and generate text, voice and video content, is revolutionizing how companies enhance productivity, foster innovation and stimulate creativity. Companies will embrace AI even more heavily in 2024, incorporating it into their products, services, internal processes, and customer communication. Individuals will get used to dealing with AI and leveraging it to make their lives easier. National governments and international organizations will need to take AI very seriously and consider what it means for national and international security.
Here are several hot AI topics for 2024:
1. Customized chatbots
In 2024, tech companies that invested heavily in generative AI will be expected to prove that they can sell their products at a profit. To do this, AI giants Google and OpenAI are betting big on going small: they are developing user-friendly platforms that let people customize powerful language models and build their own mini chatbots tailored to their specific needs, with no coding skills required. Both have launched web-based tools that allow anyone to become a generative AI app developer. This means that generative AI might finally become useful for the regular, non-technical person. In 2024, we are going to see more and more people experimenting with a wide range of AI models.
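For readers who do want to go one step beyond the no-code tools, the underlying idea of a "customized chatbot" can be sketched in a few lines: a fixed system prompt defining the bot's persona, plus a rolling message history in the payload shape that chat-style APIs commonly accept. This is a minimal illustrative sketch, not any vendor's actual SDK; the model name and persona text are assumptions, and a real app would send the payload to a provider's chat endpoint with an API key.

```python
# Minimal sketch of a "custom mini chatbot": a system prompt plus
# rolling message history, assembled in the common chat-payload shape.
# The model name and persona below are illustrative assumptions only.

def make_chatbot(system_prompt, model="gpt-4"):
    """Return a stateful chat function that accumulates message history."""
    history = [{"role": "system", "content": system_prompt}]

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        # A real app would POST this payload to the provider's chat
        # endpoint and append the reply as an "assistant" message;
        # here we simply return the request payload we built.
        return {"model": model, "messages": list(history)}

    return chat

bot = make_chatbot("You are a polite customer-support assistant.")
request = bot("Where is my order?")
print(request["messages"][0]["role"])  # system
```

The point of the sketch is that "customization" in these platforms is largely this: a persona prompt and accumulated context, which is exactly what the web-based tools configure on the user's behalf.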
State-of-the-art AI models such as GPT-4 and Gemini are multimodal, meaning they can process not only text but also images and even video. This new capability could unlock a huge number of new apps, though much will depend on how reliably these models work. Language models can still fabricate untrue statements, and other models may be easy to hack. These problems remain to be solved.
2. Generative AI’s second wave will be video
In 2022, the first generative models capable of producing photorealistic images exploded into the mainstream. Some tools create ‘wow’ images, such as the pope in a Balenciaga outfit or Obama and Biden walking together in Barbie-pink suits, and similar eye-catching pictures.
The theme of 2024 will be AI-generated video, which will take things a step further in both directions: fun, amazing visuals on the one hand, and seriously misleading, sexist, faulty or otherwise harmful material on the other.
The technology has improved significantly over the past year. A year ago, the output of video-generating tools was rather distorted and jerky; this year will bring further improvements. This will have a major impact not only on big film studios but also on, for example, marketing firms and creative agencies. The limits of what is possible will be pushed once again.
3. Robots that multitask
Inspired by some of the core techniques behind generative AI’s current boom, roboticists are starting to build more general-purpose robots that can do a wider range of tasks.
The last few years in AI have seen a shift away from using multiple small models, each trained to do different tasks—identifying images, drawing them, captioning them—toward single, monolithic models trained to do all these things and more.
Multimodal models, like GPT-4 and Google DeepMind’s Gemini, can solve visual tasks as well as linguistic ones. The same approach can work for robots, so it wouldn’t be necessary to train one to flip pancakes and another to open doors: a one-size-fits-all model could give robots the ability to multitask.
However, developing such multitasking robots requires an immense amount of properly processed data. These robots need to learn and train on huge datasets, including visual data such as images and videos. Driverless cars are one example of such AI robots. Their producers are fully aware that the consequences of a malfunctioning product can be disastrous, and the resulting liability may prove fatal for these companies.
4. AI-generated election disinformation will be everywhere
The year 2024 will bring some major elections, of which the US presidential election is likely the most significant. AI-generated election disinformation and deepfakes are going to be a huge problem. One example from 2023 speaks for all: in Slovakia, deepfakes of a liberal pro-European party leader threatening to raise the price of beer and joking about child pornography spread like wildfire during the country’s elections.
It is rather hard to say how much such content has influenced election outcomes, but its proliferation is a worrying trend. In an already inflamed and polarized political climate, this could have severe consequences.
5. AI legislation and national security
Telling which photograph is real and which was created by AI, or which text was written by a copywriter and which by ChatGPT, has become mission impossible. Technology has advanced tremendously, and national and international legislation is lagging behind.
The European Union is a worldwide front-runner in the regulation of AI. Representatives of the EU member states have agreed on the final compromise wording of the Artificial Intelligence Act. The act still needs to be formally approved by the European Parliament, which will probably happen in April 2024. It classifies technologies according to how they are used and introduces different levels of restriction according to the risk they entail. Some systems may be banned completely within the EU, while other tools, such as facial recognition algorithms, would be subject to strict regulation.
6. AI safety and ethics
As AI becomes more integrated into our lives, both professional and private, the focus on AI safety and ethics intensifies. It is not only up to national governments to look after the safe, legal and ethical use of AI. Leading AI organizations are collaborating to develop robust AI systems with standardized safety protocols and best practices to help ensure ethical AI usage. Companies and other organizations embracing AI should also take full responsibility for the proper and ethical use of these tools.