TECH NEWS | AI agents, search shifts, media deals define 2026 outlook


Insights drawn from reporting by Mobile World Live show an industry converging around agentic AI, fine-tuned models and measurable outcomes.

Hands of robot and human touching on global virtual network connection future interface. Artificial intelligence technology concept.

Artificial intelligence is entering a more pragmatic phase, shaped less by spectacle and more by integration into everyday business workflows, media platforms and consumer behavior. From telecom operators rethinking enterprise AI stacks to media giants cautiously licensing generative tools and regulators tracking how people now search for information, several developments point to how AI will function in 2026 rather than simply how impressive it looks today.

Insights drawn from reporting by Mobile World Live, including analysis by Mike Robuck and Amiya Johar, show an industry converging around agentic AI, fine-tuned models and measurable outcomes — alongside growing unease about what these shifts mean for creators, publishers and digital ecosystems.

Smaller models, smarter workflows

AT&T chief data officer Andy Markus believes the next wave of enterprise AI will not be dominated by ever-larger language models alone, but by a combination of large models and fine-tuned small language models working together inside agentic systems. In a company blog post, Markus outlined how these smaller, purpose-built models will handle specific tasks within broader AI workflows, delivering accuracy and efficiency that generalized systems struggle to achieve.

Large language and reasoning models, he explained, will increasingly act as orchestration layers — managing intent, routing tasks and overseeing decision flows. The actual work, however, will often be executed by fine-tuned small language models trained on narrowly defined datasets. According to Markus, this approach allows companies to unlock more value from their own data while keeping costs, latency and errors under control.
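To make that division of labor concrete, the sketch below shows the routing pattern Markus describes in minimal Python. It is an illustrative assumption, not AT&T's implementation: plain functions stand in for the large orchestrating model and the fine-tuned small models, and the intent names are hypothetical.

```python
# Illustrative sketch only: a large "router" model delegating narrow tasks to
# fine-tuned small language models. Plain Python functions stand in for the
# models; the intents and handlers are hypothetical, not AT&T's system.

from typing import Callable, Dict

# Stand-ins for fine-tuned small language models, each built for one narrow task.
def billing_slm(prompt: str) -> str:
    return f"[billing model] resolved: {prompt}"

def network_slm(prompt: str) -> str:
    return f"[network model] diagnosed: {prompt}"

def general_llm(prompt: str) -> str:
    # Fallback to the large general-purpose model when no specialist fits.
    return f"[general model] answered: {prompt}"

# The orchestration layer: interpret intent, route the task, oversee the flow.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "billing": billing_slm,
    "network": network_slm,
}

def classify_intent(prompt: str) -> str:
    # In a real agentic system a large reasoning model would perform this step;
    # a simple keyword check keeps the sketch self-contained and runnable.
    text = prompt.lower()
    for intent in SPECIALISTS:
        if intent in text:
            return intent
    return "general"

def orchestrate(prompt: str) -> str:
    intent = classify_intent(prompt)
    handler = SPECIALISTS.get(intent, general_llm)
    return handler(prompt)

if __name__ == "__main__":
    print(orchestrate("Why did my billing statement change this month?"))
    print(orchestrate("The network keeps dropping calls in my area."))
    print(orchestrate("Summarize this contract for me."))
```

The appeal of the pattern, in Markus's telling, is that each specialist sees only the data it needs, which is how the approach keeps cost, latency and error rates in check.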

This architecture is closely tied to the rise of AI-fueled coding. Markus said software development cycles that once took weeks can now be reduced to minutes, enabling developers to assume multiple roles across the product lifecycle, from ideation to deployment. Nontechnical teams are also beginning to use plain-language prompts to generate prototypes, lowering the barrier to application development across organizations.

Inside AT&T, AI-driven coding tools are already being used to curate data products in under 20 minutes while still meeting internal standards for security, quality and regulatory compliance. Markus expects this capability to accelerate the development of on-demand applications, where AI agents can adapt software behavior in real time rather than relying on traditional release cycles.

He added that telecom operators are well positioned to offer AI services such as fine-tuning to enterprise customers, citing their long-standing relationships with cloud providers and software partners. By 2026, Markus expects AI performance metrics — particularly accuracy, cost efficiency and speed — to become central benchmarks across industries. Using AI, he argued, will no longer be enough; organizations will be judged on how well they optimize it.

Media experiments with guardrails

While enterprises race ahead with internal AI adoption, media and entertainment companies are moving more cautiously. The Walt Disney Company recently agreed to invest $1 billion in OpenAI and signed a three-year licensing deal that makes Disney the first major content partner for OpenAI’s short-form video platform powered by the Sora text-to-video model.

Under the agreement, users will be able to generate short, prompt-driven videos featuring characters from Disney, Marvel, Pixar and Star Wars, complete with costumes, vehicles and familiar environments. Notably, the deal excludes the use of actors’ likenesses and voices, a clear signal of sensitivity to ongoing concerns from performers’ unions about generative AI.

Selected fan-created videos are expected to be made available on Disney+, while Disney itself plans to use OpenAI’s APIs to develop new products and experiences. The company also intends to deploy ChatGPT internally for employee use. Disney CEO Robert Iger described the partnership as a way to explore new forms of storytelling through generative AI while maintaining safeguards to protect creators and intellectual property.

The deal comes as OpenAI continues discussions with other major studios, though industry hesitation remains. According to reports cited by Mobile World Live, some media companies remain wary of partnering too closely with AI platforms amid unresolved questions about control, attribution and long-term value. The Disney agreement is still subject to final approvals and closing conditions, underscoring the cautious tone even behind headline-grabbing investments.

Search behavior quietly shifts

Beyond boardrooms and studios, AI is also reshaping how ordinary users navigate the internet. Research from UK regulator Ofcom shows that generative AI tools are rapidly altering search behavior, with OpenAI’s ChatGPT emerging as the most significant AI-native challenger to traditional search engines.

ChatGPT recorded 252 million visits in the UK in August 2025 alone, contributing to 1.8 billion visits in the first eight months of the year. That figure represents a 156 percent increase year on year, according to Ofcom’s Online Nation 2025 report. Competing tools such as Google’s Gemini, Anthropic’s Claude and Perplexity also saw strong growth, though from smaller bases.

Ofcom observed that search habits are shifting away from link-based results toward conversational, AI-generated summaries. Early evidence suggests that users who rely on these summaries are less likely to click through to original sources, raising concerns about declining traffic for publishers. At the same time, AI-generated responses are becoming mainstream. Around 30 percent of keyword searches now return AI-supported answers, encouraging passive adoption by users who may not actively seek out AI tools.

Looking ahead, the regulator expects generative AI to become an increasingly integral component of search, driven by rapid technological advances, heavy investment and sustained user uptake. These trends are unfolding alongside broader changes in online behavior. UK adults now spend an average of four and a half hours online each day, with more than half of that time spent on platforms owned by Alphabet and Meta. While Meta leads in total time spent across its apps, YouTube remains the single most-used platform, even surpassing Google Search in reach.

A more measured AI future

Taken together, these developments suggest that AI’s next phase will be defined by integration, measurement and negotiation rather than raw novelty. Enterprises are narrowing their focus to workflows that deliver clear value. Media companies are experimenting within tighter boundaries. Regulators are beginning to quantify how AI changes everyday digital behavior.

As 2026 approaches, the question is no longer whether AI will be embedded into systems, platforms and habits. It is how deliberately — and how responsibly — that embedding will be done.

