
This Week in B2B Tech: 11-15 May 2026

Ace


A $255.5 billion AI funding quarter, a $134 billion courtroom demand and 1,862 exposed MCP servers set the tone for B2B tech this week. The market did not run out of AI demand. It ran into the harder question of who carries the cost when AI moves from trial budget to legal claim, security exposure, public-sector service or workforce plan. By Friday morning, the week was less about new models and more about control.

In parallel, influencer discussions were circling the same move from excitement to proof. Gergely Orosz called out AI-generated LinkedIn engagement as reputation damage, while HubSpot Marketing framed ChatGPT as the buyer's new product-discovery gate. Christopher S. Penn added a measurement warning around personalised AI answers. The best posts did not reject AI. They asked whether brands, operators and vendors know what they are giving up when automation becomes the default interface, and whether the dashboard can keep up.

OpenAI's legal risk moved from founder theatre to liability

Elon Musk arrives at court during the Musk versus Altman trial

OpenAI's week ended in front of a nine-person advisory jury. CNBC reported that closing arguments had finished in Musk v. Altman, with jurors due to deliberate next week before Judge Yvonne Gonzalez Rogers decides liability. Bloomberg put the claim in sharper commercial terms: Musk is seeking remedies that include up to $134 billion, leadership changes and a reversal of OpenAI's 2025 restructuring.

The court story would be easy to dismiss as founder theatre if it were not colliding with harder liability claims. Reuters reported that a federal judge held back approval of Anthropic's proposed $1.5 billion authors' settlement, asking for more detail on fees and payments. Another Reuters report covered a wrongful-death lawsuit against OpenAI brought by the family of a Florida mass-shooting victim, alleging ChatGPT helped the attacker plan. OpenAI denies wrongdoing.

Commercial dependency then completed the picture. CNBC said trial testimony showed Microsoft feared becoming too dependent on OpenAI, even as its OpenAI-related spend and infrastructure commitments exceeded $100 billion. Quartz reported that OpenAI was preparing legal action against Apple over the ChatGPT iPhone deal. The judgment for buyers is uncomfortable: AI vendor risk is no longer buried in model cards. It now sits in contracts, indemnities, governance minutes and partner concentration.

Agent security stopped being a lab problem

Server room cabling used in coverage of exposed AI agent infrastructure

The security signal was blunt. The Hacker News reported that attackers probed PraisonAI within four hours of an authentication-bypass disclosure, targeting a legacy Flask API server that disabled authentication by default. CSO then reported 1,862 public MCP servers exposed without authentication, with tested systems allowing anonymous tool listings and, in some cases, access to sensitive production services.
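The exposure pattern is mechanical enough to check for on infrastructure you own. A minimal sketch in Python (the helper names and probe logic are illustrative, not a real MCP client): send one request with no credentials and see whether the server serves content anyway.

```python
import urllib.request
import urllib.error

def is_exposed(status_code: int) -> bool:
    # A 2xx on an anonymous request means the server answered without
    # credentials; a 401/403 means authentication is actually enforced.
    return 200 <= status_code < 300

def anonymous_probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` serves content to a request carrying no auth."""
    req = urllib.request.Request(url)  # deliberately no Authorization header
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return is_exposed(resp.status)
    except urllib.error.HTTPError as exc:
        return is_exposed(exc.code)
    except urllib.error.URLError:
        return False  # unreachable is not the same as exposed
```

Only point a probe like this at endpoints you operate; the value is auditing your own MCP servers before someone else does.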

Criminals are also learning to use AI on their side of the ledger. NBC News reported that Google disrupted hackers using a large language model to find a previously unknown vulnerability capable of bypassing two-factor authentication. Bloomberg covered AUSTRAC's warning that money launderers are using AI to fabricate identities, forge documents and disguise scam proceeds.

The enterprise version is just as worrying because it turns trusted workflows against themselves. The New Stack described living-off-the-agent tactics that hijack a user's own AI agent through prompts, emails or MCP-layer compromises. Retail Banker International reported the FCA warning that AI is reshaping financial crime. For security teams, the old perimeter story looks thin. If agents can read, write, approve and act, authentication is only the start. Enterprises need policy, logging, scoped permissions and a rollback plan before the agent gets near production.
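The shape of that control layer is easy to sketch. A minimal, hypothetical example in Python, assuming nothing about any specific agent framework: a gate that checks each proposed tool call against the agent's scoped permissions and logs the decision before anything executes, so there is an audit trail to roll back from.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Hypothetical scoped-permission table: agent id -> allowed tools.
# In practice this would live in policy configuration, not code.
SCOPES = {
    "billing-agent": {"read_invoice", "draft_email"},
    "support-agent": {"read_ticket", "draft_email"},
}

def gate_tool_call(agent_id: str, tool: str, args: dict) -> bool:
    """Allow a tool call only if it falls inside the agent's scope.

    Every decision, allowed or refused, is logged before execution.
    """
    allowed = tool in SCOPES.get(agent_id, set())
    log.info("agent=%s tool=%s allowed=%s args=%s", agent_id, tool, allowed, args)
    return allowed

# A scoped call passes; an out-of-scope call is refused, not executed.
assert gate_tool_call("billing-agent", "read_invoice", {"id": "INV-1"})
assert not gate_tool_call("billing-agent", "delete_customer", {"id": "C-9"})
```

The point is the ordering: permission check and log entry happen before the tool runs, which is what makes a rollback plan possible at all.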

Regulators moved from AI principles to control points

Microsoft office signage used in coverage of a UK business software competition probe

Competition authorities are now reading AI through market power. Computerworld reported that the UK CMA opened a Strategic Market Status probe into Microsoft's business software ecosystem, covering Windows, Office, Teams, databases, security products and Copilot. The question is whether defaults, bundling and AI integration make it too hard for customers to move.

Europe's sovereignty push added a second pressure point. Computerworld said the European Commission is considering rules that could restrict US cloud services for sensitive public-sector data in healthcare, finance and the judiciary. Reuters reported Meta offering rival AI chatbots one month of free WhatsApp business API access in the EEA while it negotiates with EU antitrust officials. Distribution, data location and AI access are becoming the same policy fight.

Agent governance is also getting more explicit. The Register reported China's draft AI-agent policy, which would require humans to retain final decision-making power over autonomous actions. TechInformed covered CISA and G7 guidance on minimum AI SBOM elements, including model identity, dataset properties and infrastructure dependencies. The direction of travel is clear enough: regulators are not waiting for a single AI rulebook. They are reaching for the control points buyers already understand: procurement, supply chain, identity, data residency and competition.
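The SBOM point translates directly into something procurement can ask for. A sketch of a record carrying the three minimum elements the guidance names; the field names and values here are illustrative assumptions, not the published CISA/G7 schema.

```python
# Illustrative record only: field names are assumptions, not the
# published CISA/G7 schema. It carries the three minimum elements
# named in the guidance: model identity, dataset properties and
# infrastructure dependencies.
ai_sbom_entry = {
    "model_identity": {
        "name": "example-vendor-model",  # hypothetical model name
        "version": "1.2.0",
    },
    "dataset_properties": {
        "sources": ["licensed-corpus", "public-web"],
        "knowledge_cutoff": "2026-01",
    },
    "infrastructure_dependencies": [
        "gpu-training-cluster",
        "vector-database",
        "mcp-tool-gateway",
    ],
}

# A procurement checklist reduces to: are the required elements present?
required = {"model_identity", "dataset_properties", "infrastructure_dependencies"}
assert required <= ai_sbom_entry.keys()
```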

AI capital kept flowing, but the centre of gravity shifted to infrastructure

Data centre racks used in coverage of AI venture funding

The funding numbers were still huge. PitchBook reported that AI startups raised $255.5 billion globally in Q1 2026, already surpassing the full-year 2025 AI venture total. Three mega-deals (OpenAI, Anthropic and xAI) accounted for $172 billion of that, or 67.3% of the capital. This is no longer a normal venture cycle. Frontier AI finance is starting to look like infrastructure finance.

Public markets reinforced the point. The Wall Street Journal reported Cerebras raising its IPO price range and seeking up to $4.8 billion as demand shifts from model training to inference. Bloomberg said Amazon's AI momentum had added $438 billion in market value this year, helped by AWS growth and Trainium chip commitments. Hardware, cloud and inference economics are where the market is looking for proof.

Enterprise software is being pulled into the same capital logic. Tech.eu reported n8n's valuation doubling to $5.2 billion after SAP's strategic investment, alongside a commercial deal to integrate n8n into Joule Studio. Reuters reported Alibaba's cloud and AI unit growing 38%, even as total revenue missed estimates. Buyers should read the week as a pricing warning. If the AI stack needs chips, cloud, workflow integration and capital at this scale, somebody has to pay for it. The invoice usually lands in enterprise packaging.

AI work redesign became a hiring story and a layoff story

Office team image used in coverage of a rebound in technology job postings

The labour market told two stories at once. Network World reported 271,483 new US technology job listings in April, a three-year high, with more than 575,000 active postings and 18,138 AI-engineer roles. AI demand is still creating work, especially across software, systems and cybersecurity.

At the same time, AI is being written directly into restructuring plans. SecurityWeek reported Cloudflare laying off more than 1,100 employees in an AI-driven restructuring, even after beating Q1 forecasts. Technology Magazine said Cisco plans to cut about 4,000 jobs while shifting spend towards AI growth priorities. The market can hire for AI and cut for AI in the same week because the skill mix is moving faster than the org chart.

The social and legal risks are becoming harder to separate from the efficiency pitch. The Guardian reported a Chinese court ordering compensation for a worker replaced by AI. A separate Guardian piece looked at tech's AI-fuelled manager purge, with companies flattening management and asking tools to carry more coordination work. The serious buyers will not ask whether AI reduces headcount in a spreadsheet. They will ask which work disappears, which work gets riskier, and who is accountable when the missing manager used to catch the mistake.

What the influencers are discussing

Illustrative image from creator coverage of enterprise AI agents and workflow security

The strongest creator thread was not model quality. It was visibility. HubSpot Marketing put the buyer problem plainly: when someone asks ChatGPT, Gemini or Perplexity about your product, the answer engine gives one answer, not a page of options. Kipp Bodnar and Kieran Flanagan treated the same issue as a practical B2B marketing problem, asking how websites get recommended by ChatGPT in the first place. The useful point here is not that SEO is dead. It is that search visibility and brand trust are merging inside the same answer box.

Measurement was the second pressure point. Christopher S. Penn warned that generic AI brand-visibility tools could lose usefulness quickly as Google personalises AI-powered search results. That is a sharp caution for comms teams. If answers change by user history, market, intent and source mix, a monthly brand snapshot will not be enough. Sarah Evans brought the PR angle, describing a press-release experiment that she said ChatGPT cited in two hours. Her larger argument is that earned media is becoming machine-readable demand infrastructure.

A different set of posts pushed back against lazy automation. Gergely Orosz called AI-generated LinkedIn comments "full-on AI slop" and framed them as professional reputation damage. Liz Rice made the human version of the same complaint, saying people come to LinkedIn to hear what others think, not to read interchangeable computer-polished updates. Jason M. Lemkin, by contrast, leaned into AI-operated marketing work, claiming his AI VP of Marketing had shipped campaigns with no human in the loop. The split is useful. AI output is acceptable when the work is bounded, reviewed and tied to a real job. It becomes corrosive when it pretends to be human judgement.

David Linthicum supplied the sceptic's frame across several posts, warning that the technology industry's love of the term "agentic AI" risks turning useful autonomy into another vendor slogan. That criticism matched the news week neatly. Buyers are not short of AI products. They are short of evidence, controls and trustworthy signals. The creator conversation is moving from adoption to operating discipline, and that is exactly where budgets, PR strategy and vendor selection now meet.

The unresolved thread is ownership. AI can raise capital at a scale that strains normal venture logic, find vulnerabilities faster than defenders can patch, reshape hiring plans and rewrite how buyers discover vendors. None of those shifts is waiting for a tidy governance model. The next few months will test whether boards can assign clear owners for AI cost, security, reputation and legal exposure before procurement turns those questions into pass-or-fail requirements.

References