December 13, 2023
by Nirmal Ranganathan, Distinguished Architect, Rackspace Technology
Together, these trends signal a future of amplified productivity, customizable intelligence, decentralized progress and proactive assistance.
In 2024, most software products will have some level of AI, whether that's a chatbot, natural language search or generative capabilities for text, images and analytics.
Today, Microsoft® is laying the path with its Copilot suite across all Microsoft products, Google leads with Duet AI, and Amazon continues to expand its services with Amazon Q integration. SaaS platforms like Salesforce and ServiceNow have their own AI integrations, while development platforms like GitHub, with its Copilot, and data platforms like Snowflake, with Cortex, are already pointing the way to what comes next year.
A vital impact of the AI infusion will be enhanced productivity. With AI assistance built into tools, users can get information, take action and accomplish tasks more efficiently without needing to code or program manually. This will drive productivity improvements across several industries.
What’s more, integrating AI into software tools improves access to AI, allowing everyday users to benefit from the advanced capabilities of the applications they use regularly.
In the coming months, we'll see AI literacy become democratized far faster than data literacy has been.
While making data accessible and actionable for all users has been a focus for the past decade, relatively few people are truly data-aware and data-driven in their day-to-day roles. In contrast, AI adoption is happening almost overnight. As a result, people will become AI-ready and comfortable using AI in their daily work far faster than they became data-savvy.
Driving this AI democratization is the integration of AI into virtually all software tools and platforms, making advanced AI functionality accessible without specialized skills.
Moreover, AI can help further data democratization by making it easier for users to interface with and extract insights from data. This will improve productivity as workers leverage AI to work more efficiently.
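One concrete form this takes is natural-language querying of data. The sketch below illustrates the pattern, with the LLM call stubbed out by a fixed mapping; the question, table and function names are all invented for illustration and are not from any specific product.

```python
import sqlite3

def answer_question(conn, question: str) -> list:
    """Translate a natural-language question into SQL and run it.

    A production system would ask an LLM to generate the SQL; here the
    translation step is stubbed with a canned mapping for illustration.
    """
    # Stub standing in for the LLM call; question and SQL are hypothetical.
    canned_sql = {
        "how many orders shipped?":
            "SELECT COUNT(*) FROM orders WHERE status = 'shipped'",
    }
    sql = canned_sql[question.lower()]
    return conn.execute(sql).fetchall()

# Toy in-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "shipped"), (2, "pending"), (3, "shipped")])

print(answer_question(conn, "How many orders shipped?"))  # [(2,)]
```

The user never writes SQL; the model (here, the stub) bridges the gap between everyday language and the data layer.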
In 2024, we’ll see a renewal in the importance and adoption of open-source AI models. In the 2010s, open source was a significant focus in many technology circles. However, as cloud and SaaS solutions have matured over the past several years, open source has taken a backseat while proprietary models and commercial solutions have gained prominence.
We’ll see open source regain traction, particularly for large language models. Open-source models like Llama 2 and Mistral represent key innovations and are becoming competitive with proprietary models from companies like OpenAI.
Open-source models have trended smaller, using fewer parameters. This makes them more cost-effective and power-efficient, paving the way for fine-tuned, domain- and task-specific models and making full deployment within private environments easier. Expect open-source models to rival commercial models in performance, and possibly even surpass them, in the coming months.
This shift aligns with AI’s facilitation of greater collaboration. Open-source models lend themselves to community-driven development and iterative learning, which helps offset privacy concerns around training data sets, among other benefits.
Data privacy and residency requirements are also easier to meet with open-source models that can run entirely in private-cloud settings, without external dependencies.
The next significant evolution in AI, particularly multi-modal large language models (LLMs), will accelerate the shift towards semantic programming for action-oriented use cases.
So far, LLMs have focused predominantly on information retrieval and content generation, like answering questions, summarizing documents and providing conversational capabilities. The primary focus for next year will be shifting from these reactive applications to more transactional, automation-driven applications.
Rather than just querying information or generating text, multi-modal LLMs and foundation models (FMs) will drastically improve how we interact with models, enable new workloads and even let us reimagine existing workloads that were impossible this year. This drives the shift toward action-oriented tasks based on natural language instructions and semantic context.
For example, an LLM could create transactions, whether updating a database or calling APIs, make plans or optimize workflows based on a user’s stated intent, without pre-programmed, rigid sequences of steps. This semantic programming would allow models to infer the necessary actions despite variations in the order and presence of steps across related processes.
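A minimal sketch of this action-oriented pattern: the model translates a user's stated intent into a structured action, and the application dispatches it to a handler. The model call itself is simulated here, and every handler, action name and field below is a hypothetical example, not part of any real API.

```python
import json

# Hypothetical action handlers the model can invoke; names and
# signatures are illustrative only.
def update_record(table, record_id, fields):
    return f"updated {table}/{record_id} with {fields}"

def call_api(endpoint, payload):
    return f"POST {endpoint} <- {json.dumps(payload)}"

ACTIONS = {"update_record": update_record, "call_api": call_api}

def dispatch(llm_output: str) -> str:
    """Execute the structured action an LLM emitted for a user's intent.

    In a real system, `llm_output` would come from a model prompted to
    translate natural language (e.g. "mark order 42 as shipped") into a
    JSON action; here it is supplied directly for illustration.
    """
    action = json.loads(llm_output)
    handler = ACTIONS[action["name"]]
    return handler(**action["arguments"])

# Simulated model output for the intent: "mark order 42 as shipped"
result = dispatch(json.dumps({
    "name": "update_record",
    "arguments": {"table": "orders", "record_id": 42,
                  "fields": {"status": "shipped"}},
}))
print(result)
```

The point of the design is that the rigid sequencing lives in the handlers, while the model supplies only the intent-to-action mapping, so related processes can vary in step order without new code paths.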
So, where the previous year saw LLMs supporting knowledge workers through content improvements, the next 12 months will unlock more impactful, prescriptive applications. This includes process enhancements, task automation, and leveraging AI assistants or agents to take actions versus just producing information. It marks a shift from reactive to proactive AI.
These AI improvements mean that in 2024, software will improve to help people work faster and smarter. AI systems will understand users better, and from there, more accessible, proactive AI can drive even newer innovations.
This is just a glimpse of a future where AI makes complex tasks simple, serves users’ needs better and takes initiative rather than just reacting.
If you’re at the outset of an AI journey, FAIR Ideate should be your first step. Talk to us today to accelerate the responsible adoption of AI across your organization.