2024 AI predictions

By Diego Rosenfeld - January 18, 2024

2023 was a landmark year that solidified generative AI’s place at the forefront of technological innovation. OpenAI, not one to rest on its laurels, unveiled GPT-4, a large language model that added multi-modal capabilities and substantially outperformed its predecessor, GPT-3.5. The year’s drama peaked when OpenAI’s CEO, Sam Altman, was briefly but turbulently ousted, only to be reinstated within a week amid threats of a mass employee walkout and heavy pressure from Microsoft and other investors.

The AI industry’s insatiable appetite for GPUs sparked a frenzy, with around 555,000 of Nvidia’s H100 GPUs shipped in 2023. Each of the advanced chips can cost upwards of $30,000 in the US and more than double that in China due to US export controls. This surge in GPU demand underscores a pivotal shift in the business and geopolitical landscape, with US and Chinese tech giants scrambling to develop their own AI chips, a strategic move to stabilize an increasingly volatile supply chain.

Google, after much anticipation, unveiled its Gemini model. Designed from the ground up to be multi-modal, Gemini debuted to a mix of excitement and skepticism. The jury is still out on whether it can truly outshine GPT-4; rigorous performance tests will have to wait until Gemini Ultra becomes available.

In the financial sphere, the S&P 500 climbed more than 20%, driven in no small part by generative AI’s potential to reshape the future of work. I think it’s safe to say that Nvidia was the big AI winner, with its share price up more than 200% in 2023!

Regulatory gears also began turning. The European Union reached political agreement on the AI Act, a risk-based framework to classify and regulate AI technologies, while the Biden administration issued an executive order building on its earlier, non-binding “Blueprint for an AI Bill of Rights.” The order requires developers of the most powerful large language models (LLMs) to notify the Commerce Department and share the results of stringent safety tests and evaluations, invoking the Defense Production Act.

As we bid adieu to a remarkable year in AI, we start off 2024, armed with high expectations and an unwavering curiosity. What does the future hold? Here are my predictions for 2024, diving into a year that promises to be as electrifying and transformative as the last.

Apple Enters the AI Fray with a Bang

In 2023, Apple maintained a low profile in the AI arena, allowing giants like OpenAI, Meta, and Google to fiercely compete in the race to build the most advanced AI models. However, 2024 is poised to be a game-changer for Apple. With significant advancements in AI technology, specifically nano models optimized for smartphone chips, Apple is set to revolutionize its devices. The much-anticipated iPhone 16 with iOS 18 is expected to emerge as an AI powerhouse, seamlessly integrating generative AI capabilities across mobile apps without relying on cloud connectivity (on-device). This includes a significant upgrade to Siri, transforming it into a more intelligent and responsive assistant that can handle more complex tasks. You may want to wait until September to upgrade your iPhone.

Behind the scenes, Apple is reportedly investing millions of dollars daily in AI research and development. This large investment is rumored to culminate in the launch of an “AppleGPT” chatbot, codenamed AJAX. Poised to rival Google’s Bard and OpenAI’s ChatGPT, AJAX symbolizes Apple’s commitment to not just entering the AI race, but leading it.

Apple’s product launch strategy has always been characterized by a conservative approach, focusing on releasing products that are not just innovative but also meticulously polished. This suggests that when Apple does unveil its AI advancements, they will be both groundbreaking and highly refined, reaffirming the company’s reputation for excellence in technology. In 2024, it’s clear that Apple is not just joining the AI party; it is set to be one of its most influential hosts.

Evolving Landscape of AI Copyright

OpenAI, along with other AI pioneers, has extensively utilized the vast resources of the internet to fuel its data-driven AI models. GPT-4, a notable example, was trained on a diverse range of online content, including Wikipedia entries, books, and billions of webpages. The model is rumored to contain more than a trillion parameters (the learned weights that encode its knowledge), and its training reportedly cost upwards of $100 million.

However, the origins of this AI training data remain shrouded in mystery, often referred to as a “black box.” Stanford has done a nice job comparing the “transparency” of various AI models with its recently released Foundation Model Transparency Index (FMTI). This ambiguity (or lack of transparency) has raised significant concerns among content creators and publishers regarding copyright issues. They argue that their intellectual property is being used by companies like OpenAI to generate revenue through ChatGPT subscriptions and model APIs, without proper acknowledgment or compensation.

This brewing discontent recently escalated when The New York Times took legal action against OpenAI and Microsoft, accusing them of copyright infringement. The lawsuit contends that these companies are unfairly benefiting from the substantial journalistic efforts of The Times, creating competitive products without permission or financial remuneration.

The outcome of these AI copyright battles is poised to significantly influence how AI training data is sourced in the future. In response to this shifting landscape, AI firms are starting to modify their approaches to data collection. For example, OpenAI recently launched its Data Partnerships program, an initiative that aims to collaborate with various organizations to develop both open-source and private datasets for model training, ensuring a more transparent and ethical approach to data acquisition in the AI industry.

In 2024, we expect more lawsuits, more paid data partnerships between AI companies and content creators, and likely substantial legal payouts in the billions of dollars. In many ways, this period of time reminds me a lot of when Napster ran into copyright infringement issues with its peer-to-peer file sharing platform.

AI Image Generators and the Evolution of Text Recognition

For digital content creators like myself, AI image generators such as DALL-E 3 and Midjourney have become indispensable tools. These platforms enable me to create stunning, copyright-free images for various digital platforms, including blogs, presentations, and podcasts. Yet, despite their prowess, these AI-driven tools often stumble over seemingly simple tasks like accurately rendering text and numbers.

Humans possess an innate ability to decipher text symbols — letters, numbers, characters — in a myriad of fonts and handwriting styles. However, current AI image generators fall short in this area. They lack an intrinsic understanding of these symbols, often leading to errors in representation and meaning. As you can see from the image below, I seriously tried like heck to get the word “prediction” spelled right!

AI image generators sometimes fall short.

This limitation stems largely from a lack of training data focused specifically on text rendering and recognition. The good news is that this shortcoming is likely to be addressed in 2024. I anticipate significant advancements in AI image generators that will enable them to accurately interpret and replicate text, bridging a crucial gap in their current capabilities.

While this technological leap is exciting, it also raises concerns about its impact on the graphic design industry. The fear is that as AI grows more proficient, it might encroach on the domain of professional graphic designers. The hope is that these advancements in AI will augment human creativity, not diminish it.

Microsoft Copilot Will Dominate the Enterprise

For those fortunate enough to have preview access through Microsoft’s early adopter program or the limited release for enterprise customers (300+ employees), the unveiling of Microsoft Copilot has been a revelation. Microsoft Copilot is uniquely “grounded” in private enterprise data, encompassing a range of content such as SharePoint files, OneDrive, Teams chats, emails, and Outlook contacts.

The magic happens through Microsoft Graph, a platform that serves as a bridge to your organization’s distinct data, relationships, and context. A process known as “grounding” (part of pre-processing) blends your input prompt with data sourced from Microsoft Graph. The grounded prompt is then processed by an advanced language model like GPT-4, ensuring responses are both personalized and compliant with data protection and privacy standards.
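
Microsoft hasn’t published the pipeline in detail, but the general pattern is easy to picture: retrieve the tenant content the user is allowed to see, blend it into the prompt, then hand the grounded prompt to the model. Here is a minimal sketch of that flow; the graph_search and build_grounded_prompt helpers are hypothetical stand-ins, not actual Microsoft Graph or OpenAI APIs.

```python
# Minimal sketch of a grounding flow: retrieve relevant enterprise content,
# prepend it to the user's prompt, then send the combined prompt to an LLM.
# graph_search and the data it returns are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class GraphItem:
    source: str   # e.g. "SharePoint", "Teams chat", "Outlook"
    snippet: str  # short excerpt of the matching content

def graph_search(user_prompt: str) -> list[GraphItem]:
    """Hypothetical search over enterprise data the user can already access."""
    return [
        GraphItem("Teams chat", "Q3 budget review moved to Thursday."),
        GraphItem("Outlook", "Client asked for the revised SOW by Friday."),
    ]

def build_grounded_prompt(user_prompt: str, items: list[GraphItem]) -> str:
    """Blend the user's prompt with retrieved context before calling the model."""
    context = "\n".join(f"[{item.source}] {item.snippet}" for item in items)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_prompt}"
    )

if __name__ == "__main__":
    prompt = "What are my top 3 priorities today?"
    grounded = build_grounded_prompt(prompt, graph_search(prompt))
    print(grounded)  # this grounded prompt would then go to a model such as GPT-4
```

The key design point is that the retrieval step respects the user’s existing Microsoft 365 permissions, so Copilot only works with content the person asking could already open themselves.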

My personal experience with Copilot for Microsoft 365 starts each morning with a simple yet powerful prompt: “Based on chats and emails over the last 24 hours, provide me with a list of the top 3 priorities that I need to address today.” This feature alone has proven to be an invaluable time-saver, on top of automating tasks like checking meeting availability, recapping Teams meetings, surfacing SharePoint data, and generating new documents based on previous work product.

Enhancing its capabilities further, Copilot Studio can integrate external data sources into Microsoft Graph, enriching the AI with broader, domain-specific knowledge. However, this cutting-edge technology doesn’t come without its challenges. At $30 per user per month, it requires expertise in configuration, data hygiene and preparation, training and adoption, and business value analysis to “make the case” to decision makers. It’s not quite the Showtime Rotisserie “set it and forget it” from the late-night infomercial (I may be dating myself). To facilitate a successful rollout, RSM has developed a Jumpstart program that helps organizations optimize their use of Copilot.

Looking ahead, I predict Microsoft Copilot will become a significant driver of productivity and automation and, for many organizations, the catalyst that brings AI out of the shadows. On January 15, 2024, Microsoft expanded availability of Copilot for Microsoft 365 to small and medium-sized businesses through direct and CSP licensing models.

Don’t Expect Much from xAI’s Grok Chatbot: A Tongue-in-Cheek Take

As I peer into the crystal ball of 2024, let’s talk about Grok, the brainchild of xAI. Grok, with its snarky, anti-woke character, is kind of like that eccentric uncle at family gatherings – you’re not sure what to make of him, and he’s definitely not the guest of honor at corporate events. Let’s be real: Grok is unlikely to be the Tesla of AI chatbots, especially in the buttoned-up world of business.

Simply put, the tool’s limited functionality in a corporate setting is as glaring as a neon sign in a library. And let’s not even get started on the quality of its data source – Twitter. That’s like trying to find a needle of wisdom in a haystack of hashtags and trolls.

Now, about the price tag – $16 per user. In the world of AI, where you get what you pay for, Grok seems to be charging a premium for its sass and wit. However, if you’re looking for AI tools that don’t come with a side of attitude and are useful in a business environment, you might want to stick with the more reliable – and decidedly less cheeky – ChatGPT Plus and Microsoft Copilot. These platforms are like the dependable workhorses of the AI world, providing access to enterprise data sets and features that are relevant to businesses.

In summary, Grok might find its niche somewhere, possibly in the realm of entertainment or among those who appreciate a dash of sarcasm with their AI. But in the corporate world, where efficiency and professionalism are key, it’s likely to be as out of place as your eccentric uncle at a board meeting. So, for those serious about integrating AI into their business processes, it might be wise to leave Grok to its own devices and opt for the more useful AI offerings on the market.

Rise of the Agents: From Science Fiction to Reality

The mention of Agent Smith from “The Matrix” (my favorite movie) immediately evokes images of sentient computer programs with human-like appearances but mechanical behaviors. However, the AI agents predicted for 2024 will deviate from this Hollywood depiction. These agents are poised to be conversational and collaborative allies, not adversaries, integrating seamlessly into various AI applications to offer autonomous assistance to humans in accomplishing common tasks.

A prime example of this integration can be seen in the realm of travel planning. Currently, tools like ChatGPT can assist in creating travel itineraries tailored to individual preferences, family size, and duration of stay. But the next leap in AI technology – agents – is expected to extend these capabilities further. AI agents are predicted to not only plan but also execute tasks such as making restaurant reservations and booking travel and accommodations, all by linking to personal data sources such as bank accounts, travel preferences, memberships, and retailers. This represents a significant upgrade from the current, more fragmented interactions with AI chatbots.
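
The tooling here is still immature, but the plan-and-execute loop behind these agents can be sketched in a few lines. In the example below, search_flights and book_restaurant are hypothetical stubs standing in for real travel and reservation APIs, and the hard-coded plan stands in for one an LLM would generate from the user’s request.

```python
# Minimal sketch of a plan-and-execute agent loop. The tools are hypothetical
# stubs; a real agent would call actual booking APIs and ask the user to
# confirm any transactional step before executing it.
from typing import Callable

def search_flights(destination: str, date: str) -> str:
    return f"Found a flight to {destination} on {date} for $420 (stub result)."

def book_restaurant(city: str, party_size: int) -> str:
    return f"Reserved a table for {party_size} in {city} (stub result)."

TOOLS: dict[str, Callable[..., str]] = {
    "search_flights": search_flights,
    "book_restaurant": book_restaurant,
}

def run_agent(plan: list[dict]) -> list[str]:
    """Execute each planned step by dispatching to the matching tool."""
    results = []
    for step in plan:
        tool = TOOLS[step["tool"]]            # look up the tool named in the plan
        results.append(tool(**step["args"]))  # run it with the planned arguments
    return results

if __name__ == "__main__":
    # In a real agent, an LLM would produce this plan from the user's request.
    plan = [
        {"tool": "search_flights", "args": {"destination": "Lisbon", "date": "2024-06-10"}},
        {"tool": "book_restaurant", "args": {"city": "Lisbon", "party_size": 4}},
    ]
    for line in run_agent(plan):
        print(line)
```

In practice, any step that touches payments or personal accounts would need explicit user confirmation before the agent executes it, which ties directly into the security and privacy concerns below.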

These advanced agents are expected to streamline the process by reducing the need to switch contexts between various websites and applications, saving time and effort. By handling both the search and transactional aspects, they promise to bring a new level of convenience and efficiency to personal and business tasks.

However, with these advancements come valid concerns regarding data security and privacy. As these AI agents gain access to sensitive personal and financial information, the potential risk cannot be overlooked.

In conclusion, the development of AI agents is anticipated to be a significant step forward in making AI more practical and integrated into our daily lives. Yet, as we embrace these advancements, it’s vital to remain vigilant about the accompanying risks and ensure that security and privacy are prioritized. My recommendation is to take a measured approach, especially when it comes to integration with your financial data.

Embracing AI Literacy: A Corporate Training Revolution in 2024

In corporate America, mandatory training sessions on security awareness and workplace harassment are commonplace. These sessions, while crucial, are often seen as compliance exercises that have a minimal impact on employee productivity. Instead, they serve primarily to minimize organizational risk. However, 2024 is set to usher in a significant shift in corporate training paradigms.

I anticipate that AI upskilling and literacy will become integral to mandatory training across many organizations. The goal of this education isn’t to turn every employee into an AI prompt engineer overnight. Rather, the aim is to provide a foundational understanding of responsible AI usage. The training will highlight how AI can be leveraged in everyday tasks and encourage employees to actively seek out and evaluate potential AI use cases.

The impact of this shift is more than just theoretical. Such AI-focused training is expected to drive productivity improvements in the range of 10% to 40%. Again, it’s not about creating super users out of every employee but about enhancing their ability to use AI tools effectively in their roles. Insights from various CEO surveys echo this sentiment, placing AI training among the top priorities for organizations in 2024. While it’s a dense read, I recommend reading (or, better yet, summarizing with ChatGPT) a Harvard field study of knowledge worker productivity using AI tools.

Corporate AI training, while significant, is just the beginning. Individual initiative and curiosity play a crucial role in the learning curve. A wealth of resources, including AI-focused business podcasts and free online training modules on YouTube and from Microsoft, Google, and Amazon, is available for those who wish to extend their learning beyond corporate programs.

At RSM, we’re facilitating this journey for clients with the introduction of the GPT Foundation Starter Kit. The Kit is designed with a wide range of professionals in mind, from executive-level leaders to IT experts, offering AI literacy workshops tailored to their specific needs. Internally, we also host AI Office Hours, a forum where employees can discuss real-world AI scenarios and seek feedback and suggestions from peers. This not only fosters a culture of continuous learning but also ensures practical application of AI knowledge across different levels of the organization.

In conclusion, 2024 is poised to be a landmark year where AI literacy becomes a staple in corporate training agendas.

The Rising Tide of AI-Driven Disinformation

2024 is a crucial election year in the United States. My most concerning prediction is not existential AI risk (think Terminator), but the role of AI in politics, specifically in the proliferation of disinformation and misinformation. The ease and efficiency with which AI tools can now generate and spread false information pose a significant threat to the integrity of electoral processes.

AI technologies have simplified the creation and dissemination of misleading content, influencing voter perceptions at an unprecedented scale and cost-effectiveness. One particularly alarming development is the mainstreaming of deep fakes. These are sophisticated, AI-generated images, videos, and audio clips that can be produced rapidly, challenging our ability to discern real from fabricated content. We’re already witnessing the use of deep fakes in political campaigns, including presidential advertisements, raising serious concerns about their impact on public opinion and democracy.

Efforts to regulate deep fakes, such as labeling policies, are in place, but their effectiveness remains uncertain, especially when it comes to identifying and penalizing violators. Recent decisions by major social media platforms, including X (formerly Twitter), Meta, and YouTube, to roll back policies guarding against hate speech and misinformation further exacerbate the situation. X, in particular, has significantly reduced the team focused on curbing misinformation, opting instead for a community-driven moderation approach. This trend of downsizing the teams dedicated to monitoring and controlling misinformation across social media platforms will hinder our ability to combat the spread of false information.

I urge everyone to exercise the utmost caution and responsibility. I know we all enjoy a fun gag every once in a while (see the Pope’s puffy jacket), but please critically assess the credibility of online content before sharing it. Prioritizing caution and diligence over speed and entertainment is essential to preserving the integrity of our political discourse and democratic processes.

Beware of AI-generated images being used in deep fakes.

The AI Landscape in 2024

Looking ahead, the AI landscape presents both remarkable opportunities and significant challenges. With developments ranging from Apple’s new AI initiatives to the complexities of AI-driven disinformation, the impact of AI is undeniable.

The progress in AI image generation, the emergence of AI agents, and the growing need for AI literacy in workplaces illustrate the diverse applications of this technology. At the same time, these advancements highlight the importance of ethical considerations and informed usage.

In preparing this article, I asked ChatGPT to help me out with some editing along the way, especially my tongue-in-cheek take on Grok! While the ideas and concepts are my own, this collaboration exemplifies the evolving role of AI as a tool for enhancing productivity and creativity.

Learn more about RSM’s AI services by visiting our website

Diego Rosenfeld is a principal in RSM’s Boston office, serving as the national go-to-market leader for managed IT services (MITS) and a member of the RSM managed technology services leadership team. As go-to-market leader, Diego oversees MITS product strategy and regional and market-based client engagement teams. He works hand in hand with RSM industry teams to develop managed services that integrate our rich capabilities into scalable, industry-relevant offerings.
