This Week in B2B Tech: 4-8 May 2026

Ace

$725 billion in Big Tech AI spending, a £3 billion UK Google ads claim and 300,000 exposed Ollama servers gave B2B tech its sharpest week for accountability in months. The buying story was not whether AI demand exists. It was whether the cost, control model and legal exposure now sit inside budgets mature enough to survive procurement scrutiny. By Friday, the strongest signal was coming from the seams: cloud balance sheets, browser extensions, search regulation, layoffs and courtroom testimony.

In parallel, influencer discussions moved from excitement to audit mode. Dharmesh Shah asked which answer engine optimisation tool ChatGPT would recommend, then admitted the prompt itself might influence the answer. Jason M. Lemkin's 10K agent pushed back on the idea that it had replaced a senior marketer, saying it had replaced the boring half of the job. Those two posts caught the week neatly: AI is useful enough to affect demand, work and vendor choice, but not clean enough to run without adult supervision.

AI's infrastructure bill stopped looking abstract

Data centre equipment used in coverage of Big Tech AI infrastructure spending

The week began with the bill. The Financial Times put Big Tech's AI spending at $725 billion, with Amazon, Alphabet, Microsoft and Meta heading towards a free-cash-flow squeeze severe enough to pull the group to a decade low. That is the part of the AI story buyers should watch most closely. If the platforms have to fund compute through debt, job cuts and tighter shareholder returns, they also have to turn AI usage into pricing power quickly.

Demand signals were still strong. CNBC reported that AMD lifted its outlook after Lisa Su cited surging CPU demand tied to agentic AI. Techzine's earnings round-up showed Datadog and Akamai benefiting from AI-linked demand, even as CoreWeave and Cloudflare were marked down. Business Chief framed the same race through Google, Meta, Microsoft and Amazon earnings, where cloud growth and AI capex now belong in the same sentence.

The financing end of the market looked even hotter. Anthropic was reported to be weighing a deal near a $1 trillion valuation, while The Information said DeepSeek could raise more than $7 billion as it pushes commercialisation. The judgment call is blunt: AI infrastructure is no longer a future option on the P&L. It is becoming the P&L, and buyers will feel that through pricing, commitments and bundled services.

Regulators started treating frontier AI as operational risk

White House image used in coverage of proposed frontier AI oversight

Washington's AI posture hardened fast. The Register reported that the Trump administration was considering pre-deployment review for high-risk frontier models, with cybersecurity and national security doing the political work that consumer protection often cannot. The Wall Street Journal linked the shift to Anthropic's Mythos model, after officials worried that autonomous vulnerability discovery could outrun the defensive capacity of small governments.

Europe was not moving in a straight line either. The Decoder reported that much of the EU's high-risk AI rulebook may be delayed, which is less a retreat than an admission that the compliance machinery is not ready. At the same time, Reuters covered Meta fighting an EU order that could force WhatsApp access for rival AI chatbots. The battle is not only about models. It is about distribution, user data and who gets to sit inside the default consumer interface.

The legal risk kept spreading into product claims and training data. TechCrunch reported that Pennsylvania sued Character.AI after a chatbot allegedly posed as a doctor. The Independent covered allegations that Mark Zuckerberg personally authorised Meta's AI copyright infringement. For B2B vendors, this changes the sales conversation. The buyer is no longer asking only what the model can do. They are asking who is liable when it says too much, copies too much or enters a regulated workflow before the governance exists.

Agent security became a permissions problem

Llama image used in coverage of an Ollama AI framework vulnerability

The agent security story was not theoretical. CSO reported a critical Ollama flaw affecting roughly 300,000 exposed servers, where unauthenticated attackers could upload a crafted file and read sensitive process memory. A day later, CSO also covered a Claude in Chrome issue that researchers said could let malicious browser extensions steer AI-assisted actions across email, Drive, GitHub and browsing workflows.

The more useful read came from operational failures. The New Stack described how a Cursor agent wiped PocketOS's production database in under 10 seconds after using an over-scoped Railway credential. TechRepublic covered indirect prompt injection against production AI agents, with hidden instructions in pages, documents and emails triggering data exfiltration through legitimate interfaces. The pattern is obvious and awkward: agents inherit old identity mistakes, then automate them.

Security agencies and researchers are now saying the same thing in different registers. Five Eyes agencies warned that agentic AI is too unstable for rapid critical-infrastructure rollout. The Hacker News reported a scan of 1 million exposed AI services that found weak authentication and misconfigured deployments. The B2B implication is plain: if an agent can act, it needs identity, logging, boundaries and rollback before it gets production access. Model safety teams cannot compensate for bad permissions.
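The permissions point above can be made concrete with a sketch. This is not code from Cursor, Railway or any vendor named here; the names (`guard_sql`, `AgentPermissionError`) are illustrative. The idea is simply that the scope check lives outside the model, in code the agent cannot talk its way past:

```python
# Minimal sketch of a least-privilege gate between an agent and a database.
# Assumption: the agent submits SQL as plain strings and the wrapper decides
# what is in scope before anything reaches production.

READ_ONLY_VERBS = {"SELECT", "SHOW", "EXPLAIN"}


class AgentPermissionError(Exception):
    """Raised when an agent-issued statement exceeds its granted scope."""


def guard_sql(statement: str, allowed: set = READ_ONLY_VERBS) -> str:
    """Pass a statement through only if its leading verb is in scope.

    Returns the statement unchanged if permitted; raises otherwise, so the
    caller can log the refusal and route it for human review rather than
    letting the agent retry silently.
    """
    stripped = statement.strip()
    verb = stripped.split(None, 1)[0].upper() if stripped else ""
    if verb not in allowed:
        raise AgentPermissionError(f"agent not permitted to run {verb or '<empty>'}")
    return statement
```

The design choice matters more than the ten lines: identity, logging and rollback only work if enforcement sits in deterministic infrastructure, not in the prompt.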

AI layoffs turned efficiency into a reputational risk

Office workers image used in coverage of AI-related technology layoffs

AI's labour story got harder to separate from the cost story. Fast Company reported that Cloudflare would cut more than 1,100 workers, about 20% of its workforce, while pointing to AI-driven changes in how work gets done. The Wall Street Journal said PayPal was targeting at least $1.5 billion in gross run-rate savings as it accelerated AI adoption after a profit fall.

Financial services gave the theme a boardroom version. Commerzbank said it would cut around 3,000 jobs while investing €600 million in AI through 2030. CRN reported Cognizant setting aside $270 million for layoffs under an AI operating model plan that could cut up to 15,000 jobs globally. These are not side effects any more. AI programmes are being written directly into margin plans.

There was pushback too. Business Chief covered a Chinese court ruling that employers cannot use AI as a blanket justification for dismissal. TechCrunch reported that Match Group is slowing hiring to fund AI tools rather than presenting the spend as pure addition. The lesson for vendors is uncomfortable. Selling AI as headcount removal may move a spreadsheet, but it also moves the story from productivity to accountability, and employees, courts and customers will all have a view.

OpenAI's courtroom week made governance commercial

Elon Musk arriving at court during the Musk versus Altman trial

OpenAI's governance fight stayed in the headlines because it now reads like a buying-risk story, not just a founder feud. The Wall Street Journal framed the first week of Musk versus Altman around the claim that OpenAI had stolen a charity. CNBC reported Musk's line that you cannot just steal a charity, referring to his roughly $38 million donation and the later commercial structure.

The testimony was messy enough to make the governance questions vivid. The BBC reported Greg Brockman's account of fearing Musk would hit him during a 2017 confrontation. NBC News carried similar testimony, with Brockman describing demands for majority control. Business Insider then published texts from the 2023 ouster period, adding another layer to how unstable the company's internal authority once became.

Commercial terms kept moving alongside the legal drama. Reseller News reported that Microsoft and OpenAI changed contract terms again, removing prior cloud exclusivity while keeping Microsoft's IP licence. Enterprise buyers can tolerate founder drama when the product is peripheral. They are less forgiving when the same vendor sits in infrastructure, knowledge work and customer workflows. OpenAI's real risk is not that the trial distracts buyers. It is that it gives them language for concerns they already had.

AI visibility moved from SEO theory to revenue pressure

Google office sign used in coverage of advertising monopoly claims

Search and advertising became the week's quiet B2B distribution story. The Independent reported a £3 billion UK claim against Google over display advertising monopoly allegations. Reuters said Google had been given more time to answer EU Digital Markets Act concerns. The regulatory pressure matters because AI answers are arriving before the old search and ad fights are settled.

Publishers and marketers were already adjusting. Search Engine Roundtable reported that Google may adjust its site reputation abuse policy for EU news sites. Digiday described publishers' anger at AI data brokers and scrapers keeping all the value. That makes AI visibility a market-access issue, not an analytics curiosity. If the answer layer becomes the first recommendation engine, brands need to know what it is saying before the buyer does.

The ad platforms are following the same direction. Inc. reported that OpenAI is expanding ChatGPT ads beyond pilot access, with a self-serve Ads Manager and cost-per-click bidding. Marketing Tech News said Reddit's AI ad tools helped drive a 69% revenue rise. The old funnel is not dead, but it is being rewired. The harder question for comms and demand teams is whether their best evidence is visible where AI systems now assemble the shortlist.

What the influencers are discussing

LinkedIn post image from Dharmesh Shah about answer engine optimisation tools

The creator thread with the cleanest edge was AI visibility. Dharmesh Shah's answer engine optimisation post was half joke and half warning: if ChatGPT says HubSpot is the best AEO tool, and the post itself might improve the odds of that answer, then brand visibility has become participatory. Media Copilot sharpened the commercial case with WebFX data claiming AI traffic grew 796% from 2024 to 2025 and converted more decisively. Sarah Evans supplied the comms proof point, saying ChatGPT cited an AI notice in two hours and placed one client in 76% of answers across 200 buyer prompts and five engines.

The second strong thread was agent realism. Jason M. Lemkin's post from SaaStr's 10K agent rejected the lazy claim that an AI had replaced a VP of Marketing. The line that cut through was more useful: the agent had replaced the boring half of the job. That matters because it avoids both hype and denial. It gives buyers a practical test for agent adoption: where is the repeatable work, who approves the output, and what does the human do with the time returned?

Dave Gerhardt approached the same anxiety from the CMO seat. His point was that AI has increased the need for peer advice because marketers now have less confidence in what is real, what is working and who to trust. David Linthicum focused on guardrails for agentic AI, arguing that tool access only works when humans define goals, permissions and monitoring. Carolina Milanesi framed the investor version: the first agentic cycle is forming around measurable work, workflow control and budget formation, not usage theatre.

Together, the posts made the week feel less like an AI adoption story and more like an operating-model story. Visibility, agents and infrastructure are converging on the same buyer question: who owns the system once it starts affecting revenue, spend, hiring and reputation? The best creator posts did not promise a clean answer. They made the mess specific enough for teams to start assigning owners, budget lines and failure rules before the tools become default workflow. That is where the serious buyer work begins.

The unresolved thread is ownership. AI spending is now large enough to change free cash flow, AI agents are capable enough to break production systems, and AI visibility is early enough that the measurement market still feels contested. B2B buyers will keep adopting because the upside is real. The next fight is over whether vendors can make the economics, controls and liability clear enough that one executive can sign the contract without quietly hoping legal never reads the implementation plan.
