Meta Platforms (NASDAQ:META) has undergone a massive restructuring after drawing headlines for “overspending” to build out its artificial intelligence arm and catch up in the intensifying AI arms race. The company has recently made news for large-scale layoffs and for its plan to split its newly formed Meta Superintelligence Labs (MSL) into four focused divisions: research, infrastructure, product, and superintelligence. This restructuring follows months of internal friction, underperformance in AI model development, and external pressure from rivals like OpenAI, Anthropic, and Google DeepMind. Led by newly appointed Chief AI Officer Alexandr Wang—brought in from Scale AI after Meta invested $14.3 billion in the firm—the overhaul reflects CEO Mark Zuckerberg’s readiness to reshape Meta’s organizational DNA. Yet the revamp isn’t just structural. Meta is evaluating layoffs within its thousands-strong AI workforce and exploring the use of third-party AI models, signaling a sharp departure from its historically insular development approach.
Precision Architecture: Four Divisions To Fix Execution Bottlenecks
Meta’s decision to disband its legacy AGI Foundations team and relaunch MSL under a four-group model is a direct response to internal execution challenges and missed development milestones. The new structure includes MSL Research (core innovation), MSL Infra (compute and hardware), MSL Product (user-facing integrations), and a superintelligence-focused MSL Ops. Each group now has a narrower mandate to enable faster feedback loops, reduce inter-team gridlock, and ensure resource alignment across Meta’s AI training and deployment lifecycle. Robert Fergus, a FAIR co-founder, returns from DeepMind to lead research; Aparna Ramani takes charge of infrastructure; and Alexandr Wang directly oversees foundation model work through the newly branded TBD Labs. By unbundling MSL into operationally distinct yet interlinked teams, Meta aims to sharpen execution and increase accountability, addressing the gridlock that plagued its AI organization under the previous monolithic structure. The old AGI Foundations team struggled to integrate research with product needs, while infrastructure decisions often lagged behind model training demands. Now, infrastructure is no longer a shared resource but a strategic lever managed independently, potentially speeding up GPU provisioning, hardware customization, and model deployment cycles. The product group can also iterate directly with Meta’s app ecosystem, testing Llama and generative AI tools in Instagram, Facebook, and WhatsApp more rapidly. The focused structure is built to scale and pivot—important as Meta balances fast shipping with long-term R&D. But this modularity also increases coordination complexity and raises the stakes for inter-group alignment. If successful, this realignment could resolve Meta’s development gridlock. If not, it could lead to organizational drift.
Talent Blitz: Nine-Figure Packages & Strategic Star Hires
Meta’s war chest has enabled it to mount one of the most aggressive talent grabs in Silicon Valley’s AI landscape. As the company battles to close the innovation gap with OpenAI and Google DeepMind, it has shelled out compensation packages reaching into the hundreds of millions to lure top researchers, sparking what insiders describe as a poaching war. Alexandr Wang’s hire as Chief AI Officer was both symbolic and strategic. Formerly the CEO of Scale AI, Wang now leads the superintelligence initiative and TBD Labs—the team behind Meta’s flagship Llama models. His appointment came shortly after Meta invested $14.3 billion in Scale AI, reflecting Zuckerberg’s conviction that founder-operators with AI scale-up experience are better positioned to lead moonshot projects. Meanwhile, other key leadership shifts include the return of Robert Fergus to head MSL Research and the exit of Loredana Crisan, signaling a reshuffling of Meta’s generative AI leadership. Internally, these moves are framed as a bid to restore urgency and eliminate the internal turf wars that had slowed model progress. The reality is that Meta now has one of the deepest AI benches in the world—but at a steep cost. The burn rate for elite AI researchers with stock-heavy comp structures is high, and unless Meta can deploy models into commercial products quickly, investor patience could wear thin. The new leadership structure also needs time to gel. High-profile hires may bring credibility and ideas, but execution across an org of this scale requires synchronized delivery—a weak spot for Meta in past AI efforts. Ultimately, talent alone won’t win the AI race; integration and velocity will.
Opening The Model Stack: From Proprietary To Hybrid AI Strategy
Meta is quietly walking away from its purist “build everything in-house” AI doctrine. Under pressure to commercialize features faster and offset stalled model performance, the company is actively exploring the use of external AI models—both open-source and licensed closed-source. Internally, this represents a major philosophical shift. Historically, Meta has positioned Llama as a best-in-class open-source alternative to closed models like GPT-4 and Claude. But recent stumbles in training and deployment have prompted leadership to rethink exclusivity. According to insiders, Meta is now evaluating hybrid sourcing—where in-house models power core experiences, while third-party systems are used to plug functionality gaps or accelerate launches. The approach echoes what enterprise software buyers want: performance, not ideology. Meta’s vast data infrastructure and user base make it an attractive deployment platform for third-party models, and the company has the financial leverage to negotiate favorable licensing terms. Open-source model integration also aligns with Meta’s stated commitment to democratizing AI—while buying time to refine its Llama roadmap. However, the hybrid model introduces new governance challenges. Who owns performance when multiple models underpin a single feature? How does Meta ensure compliance across licensing and data use? These are not trivial issues, particularly when integrating AI into messaging, social content, and ads—all of which have privacy, safety, and regulatory implications. Still, the hybrid model offers speed, flexibility, and fallback options—key advantages in a race where iteration cycles are collapsing and user expectations are soaring.
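The hybrid sourcing pattern described above can be sketched as a simple routing layer: prefer the in-house model when it covers a request’s required capabilities, and fall back to a licensed third-party system to plug gaps. This is a minimal illustration of the concept only; the class names, model names, and capability sets below are hypothetical and do not reflect Meta’s actual stack.

```python
# Minimal sketch of hybrid model sourcing: route to an in-house model
# when it can serve the request, otherwise fall back to a third-party
# model. All names and capabilities here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Model:
    name: str
    capabilities: set = field(default_factory=set)

    def generate(self, prompt: str) -> str:
        # Placeholder for a real inference call.
        return f"[{self.name}] response to: {prompt}"


class HybridRouter:
    def __init__(self, in_house: Model, third_party: Model):
        self.in_house = in_house
        self.third_party = third_party

    def route(self, prompt: str, needs: set) -> str:
        # Prefer the in-house model if it covers every required capability;
        # otherwise plug the functionality gap with the third-party system.
        if needs <= self.in_house.capabilities:
            return self.in_house.generate(prompt)
        return self.third_party.generate(prompt)


router = HybridRouter(
    in_house=Model("in-house-llm", {"chat", "summarize"}),
    third_party=Model("licensed-external-llm", {"chat", "summarize", "code"}),
)
print(router.route("Summarize this thread", {"summarize"}))  # served in-house
print(router.route("Write a SQL query", {"code"}))           # falls back
```

One design note: a capability-set check like this also makes the governance questions raised above concrete, since each response can be attributed to exactly one underlying model.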
Downsizing Dilemma: Efficiency Gains Vs. Execution Risk
While the structural reorg aims to accelerate AI delivery, Meta is also weighing cuts to its sprawling AI workforce—currently numbering in the thousands. Although no layoffs were announced with the reorganization, sources suggest the company is reviewing overlapping roles, team redundancies, and potential realignment into other divisions. This is a risky but predictable move given Meta’s high cost base, mounting R&D spend, and pressure to show ROI from its AI investments. Leadership’s messaging has emphasized operational focus, but any downsizing risks collateral damage—especially when talent retention is already challenged by hyper-competitive recruiting across the Valley. Past restructurings in Meta’s Reality Labs and Ads teams show that cuts can derail morale and delay delivery timelines. Moreover, with AI workloads growing—model training, product testing, compliance, infra scaling—the organization needs capacity, not contraction. The key question is whether Meta can surgically reduce overhead without weakening its core execution engine. Already, the dissolution of AGI Foundations and reassignment of leaders like Ahmad Al-Dahle and Amir Frenkel point to uncertainty within the ranks. Meta may improve its operating margins and reduce redundancy, but the danger lies in undercutting delivery just as product teams begin integrating AI into user-facing experiences. The window to compete in generative AI is narrow, and if downsizing goes too far, Meta could lose ground before its reorg fully pays off.
Final Thoughts
Source: Yahoo Finance
Meta’s recent earnings have created solid upward momentum in the stock, and the recent layoffs and AI reorganization efforts have not triggered any major selloff. However, we believe investor scrutiny of Meta’s AI efforts and their results will remain high—especially given Meta’s elevated valuation. As of August 20, 2025, Meta trades at an LTM EV/EBITDA of 19.95x and a forward EV/Sales of 8.90x. Its forward P/E sits at 26.48x, while its market cap to free cash flow multiple has surged to 58.95x. These multiples suggest significant AI-driven upside is already priced in. But Meta must now convert strategy into results. With competitors shipping faster and expectations rising, execution—not vision—will be the metric that defines Meta’s AI chapter.
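For readers who want to sanity-check multiples like those quoted above, the arithmetic is straightforward. The sketch below shows how each ratio is derived; the input figures are hypothetical round numbers, not Meta’s actual financials.

```python
# How the valuation multiples cited above are computed.
# All input figures below are HYPOTHETICAL placeholders for illustration,
# not Meta's actual reported financials.

def enterprise_value(market_cap: float, total_debt: float, cash: float) -> float:
    """EV = equity market value + total debt - cash and equivalents."""
    return market_cap + total_debt - cash


def valuation_multiples(market_cap, total_debt, cash,
                        ltm_ebitda, fwd_sales, price, fwd_eps, fcf):
    ev = enterprise_value(market_cap, total_debt, cash)
    return {
        "EV/EBITDA (LTM)": ev / ltm_ebitda,       # enterprise value vs. trailing EBITDA
        "EV/Sales (fwd)": ev / fwd_sales,          # enterprise value vs. forward revenue
        "P/E (fwd)": price / fwd_eps,              # share price vs. forward earnings per share
        "MktCap/FCF": market_cap / fcf,            # equity value vs. free cash flow
    }


# Hypothetical example (dollar figures in $B, per-share figures in $):
m = valuation_multiples(market_cap=1800, total_debt=40, cash=60,
                        ltm_ebitda=90, fwd_sales=200,
                        price=740.0, fwd_eps=28.0, fcf=30)
for name, value in m.items():
    print(f"{name}: {value:.2f}x")
```

With these placeholder inputs, EV works out to $1,780B, giving EV/Sales of exactly 8.90x; swapping in a company’s reported figures reproduces its actual multiples.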