HN Summaries - 2026-04-28

Top 10 Hacker News posts, summarized


1. Microsoft and OpenAI end their exclusive and revenue-sharing deal

HN discussion (688 points, 623 comments)

Unable to fetch article: HTTP 403

The Hacker News discussion highlights key points about Microsoft and OpenAI ending their exclusive partnership and revenue-sharing agreement. Microsoft will no longer pay OpenAI a share of its revenue, and the partnership is no longer exclusive, allowing OpenAI to sell its models through other cloud providers like Google Cloud, which could gain a unique advantage by reselling all three major lab models (OpenAI, Anthropic, and Google). Microsoft remains OpenAI's primary compute provider via a $250 billion Azure services contract, though it loses its right of first refusal. OpenAI still owes Microsoft money until 2030, but with a capped liability, and Microsoft retains its $132 billion stake. Reactions are mixed: some view it as a strategic shift to avoid kneecapping OpenAI amid competition from Anthropic, while others interpret it as Microsoft "shafting" OpenAI or a "breakup" disguised as a "simplification." The announcement's framing and references to AGI are widely criticized as hype, and skepticism persists about how much is binding terms versus PR.

2. GitHub Copilot is moving to usage-based billing

HN discussion (486 points, 374 comments)

GitHub will transition Copilot plans to usage-based billing on June 1, 2026, replacing premium request units (PRUs) with GitHub AI Credits consumed based on token usage (input, output, cached tokens). Base plan prices remain unchanged (Pro: $10/month with $10 credits, Pro+: $39/month with $39 credits, Business: $19/user/month with $19 credits, Enterprise: $39/user/month with $39 credits). Code completions/Next Edit suggestions remain free, but Copilot Chat, code review (which also consumes GitHub Actions minutes), and agentic features will consume credits. Annual plan subscribers face model multiplier increases on June 1 but transition to monthly plans upon expiration. Businesses gain pooled credits and admin budget controls. The change aims to align pricing with rising inference costs from agentic usage.
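For illustration, the credit model described above reduces to simple arithmetic: per-token rates for input, output, and cached tokens, scaled by a per-model multiplier, drawn down against the plan's monthly credit balance. The rates and multiplier below are hypothetical placeholders, not GitHub's published pricing.

```python
# Hypothetical sketch of usage-based credit accounting.
# Rates and the multiplier are illustrative placeholders, not GitHub's prices.

def request_cost(input_tokens, output_tokens, cached_tokens,
                 rate_in, rate_out, rate_cached, multiplier=1.0):
    """Credits consumed by one request: per-token rates times a model multiplier."""
    base = (input_tokens * rate_in
            + output_tokens * rate_out
            + cached_tokens * rate_cached)
    return base * multiplier

# A Pro plan includes $10 of credits per month.
monthly_credits = 10.00

# e.g. one chat request on a premium model carrying a 6x multiplier
cost = request_cost(input_tokens=8_000, output_tokens=1_200, cached_tokens=20_000,
                    rate_in=1e-5, rate_out=4e-5, rate_cached=2.5e-6,
                    multiplier=6.0)
remaining = monthly_credits - cost
```

Under these made-up numbers a single premium-model request consumes roughly a dollar of credits, which is the dynamic commenters objected to: the multiplier, not the base rate, dominates the bill.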

HN commenters reacted strongly to the price increases, particularly the dramatic model multipliers (e.g., 1x to 6x for GPT-5.4, 1x to 27x for Opus), arguing this makes Copilot less competitive against direct API purchases (e.g., OpenRouter, Claude Code). Many expressed skepticism about the value proposition, questioning the lack of token-per-dollar discount compared to alternatives and noting the end of "subsidized inference." Concerns included the impact on predictable costs for small businesses, potential abandonment of annual plans, and the legality of terms changes. Alternatives like Cursor, Windsurf, and Copilot CLI were mentioned, with debate on whether GitHub's integration justifies the per-token cost.

3. Men who stare at walls

HN discussion (390 points, 193 comments)

The author describes a routine for improving focus and productivity that involves avoiding screens and entertainment during work and staring at a wall for several minutes when feeling mentally drained. This method is proposed as a solution to the problem of modern information overload, which the author quantifies using a 2012 study showing a daily intake of 34 GB of data in 2008, extrapolated to 87 GB today. The author details their personal experience with a "brain fog" cycle driven by caffeine and media consumption, and explains that wall staring, particularly when combined with techniques to activate the parasympathetic nervous system and achieve a "mind blanking" state, effectively restored their focus, though they found it surprisingly difficult.

The Hacker News discussion heavily centered on comparing wall staring to existing practices, with many commenters stating it is essentially a form of mindfulness or meditation. Skepticism was expressed regarding the article's statistic on information overload, with one commenter arguing that simply viewing data, like looking at a wall, doesn't equate to processing it. Alternative solutions were proposed, including taking walks, exercising, or spending time in nature. One commenter made a distinction between meditation's deliberate focus and the act of simply "doing nothing" to rest, arguing the latter is a culturally lost concept. Other reactions included finding the title poetic and sharing anecdotes about similar instinctive behaviors for problem-solving.

4. Pgbackrest is no longer being maintained

HN discussion (387 points, 196 comments)

pgBackRest, a reliable backup and restore solution for PostgreSQL, is no longer being maintained. After 13 years as a passion project, the maintainer is discontinuing development due to unsuccessful job searches and insufficient sponsorship, which makes the project unsustainable. Current users are advised to fork the project under a new name. The project, which supported features like parallel compression, block-level backups, WAL archiving, and encryption, reached v2.58.0 as its final stable release.
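As a rough illustration of the feature set listed above, a minimal `pgbackrest.conf` might enable parallel processing, compression, retention, and repository encryption. The paths, stanza name, and values here are examples, not recommendations:

```ini
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2          ; keep two full backups
repo1-cipher-type=aes-256-cbc   ; encrypt the repository
repo1-cipher-pass=changeme
process-max=4                   ; parallel compression/transfer
compress-type=zst

[main]
pg1-path=/var/lib/postgresql/16/main
```

A full backup would then be taken with `pgbackrest --stanza=main backup --type=full`; any fork would presumably keep this configuration format, since users are advised to migrate to one.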

HN comments express widespread disappointment, with many users praising pgBackRest's maturity and versatility for production backups. Key discussions center on finding alternatives like WAL-G, Barman, and databasus, along with concerns about the project's sustainability. Some commenters criticize the maintainer's stance on renaming forks, comparing it to "salting the ground" and arguing it hinders community continuity. Others highlight broader OSS funding challenges, noting that users often rely on critical tools without contributing financially, and suggest corporate sponsorship (e.g., CockroachDB) as a potential solution.

5. 4TB of voice samples just stolen from 40k AI contractors at Mercor

HN discussion (407 points, 154 comments)

On April 4, 2026, the extortion group Lapsus$ leaked 4TB of data from Mercor, containing voice samples paired with government-issued identity documents from 40,000 AI contractors. This unique combination provides attackers with high-quality, studio-clean recordings (averaging 2–5 minutes) and verified identities, enabling sophisticated synthetic voice cloning. The breach poses immediate risks, including bypassing bank voice verification, deepfake scams targeting employers or family, insurance fraud, and impersonation attacks. Affected individuals are advised to audit their public audio footprint, establish verbal codewords with contacts, disable or re-enroll voice-biometric systems, request written verification protocols from banks, and use forensic tools like ORAVYS to analyze suspicious audio.

The Hacker News community emphasized the severity of the breach, criticizing Mercor for negligent data collection and warning that biometrics cannot be "rotated" like passwords, making them permanent liabilities. Commenters highlighted the dangerous precedent of merging voice and ID data, noting that companies hoard such information unnecessarily ("Datensparsamkeit" was cited as a key principle). Skepticism surrounded proposed solutions, particularly verbal codewords for financial contacts, due to impracticality in large-scale operations. Dark humor emerged regarding "rotating voices," while broader concerns were raised about systemic failures in corporate data handling and the inevitability of similar breaches affecting far larger datasets (e.g., from banks or tech giants). Some users questioned whether the data was "stolen" or intentionally shared, reinforcing distrust in AI training platforms.

6. China blocks Meta's acquisition of AI startup Manus

HN discussion (258 points, 154 comments)

China's state planner has ordered Meta to unwind its $2 billion acquisition of Manus, a Singapore-based AI startup with Chinese origins. The National Development and Reform Commission cited compliance with laws and regulations as the reason for blocking the deal, which had already attracted scrutiny from both China and the U.S. The startup, which relocated from China to Singapore—a practice known as "Singapore-washing" to evade regulatory scrutiny—had achieved significant growth, including $100 million in annual recurring revenue in eight months. Meta has maintained that the transaction complied with applicable law, while Chinese officials emphasized the importance of mutual benefit in such matters.

The HN discussion focused on the geopolitical implications and enforcement challenges of China's decision. Many commenters noted the irony of China blocking a deal involving a Singapore-based company, suggesting it undermines the "Singapore-washing" tactic and sets a precedent for controlling Chinese-origin technology abroad. There was debate over China's leverage to enforce the demand, with some questioning how Beijing could compel Meta, which has no significant operations in China, to comply. Others framed the move as tit-for-tat retaliation against U.S. export controls, while some criticized Meta for underestimating China's resolve to protect its AI advancements and strategic talent.

7. “Why not just use Lean?”

HN discussion (243 points, 159 comments)

The article argues against the perceived "cultism" around the Lean proof assistant, emphasizing that formalized mathematics predates Lean by decades. It traces the history of formalization back to AUTOMATH in 1968 and credits other systems like Boyer-Moore's computational logic, LCF, and HOL Light for foundational work. The author critiques Lean's dominance, noting its success stems largely from pragmatic choices like abandoning constructive proofs and building a library for advanced mathematical structures, not inherent superiority. The piece defends alternative systems like Isabelle/HOL for its automation, legibility, and simpler approach to dependent types, while also highlighting Robin Milner's LCF method of using abstract data types instead of explicit proof objects as an efficient alternative to modern type-theoretic approaches.

The HN discussion largely agrees with the article's call to avoid groupthink and consider alternatives to Lean, with one commenter praising the post for encouraging a wider look at the landscape. A significant portion of the debate centers on Lean's perceived strengths and weaknesses: some defend it as a pragmatic and powerful tool with a massive, active community, comparing its rise to "worse is better" dynamics where popularity trumps technical perfection. Others critique the article's understanding of Lean, pointing out that it also optimizes proof objects away like LCF. The conversation also explores Lean's appeal to programmers, its clunkiness compared to Agda or Coq, the role of its Mathlib library, and even humorous asides about the name "Lean" and a proposed alternative called "Purple Drank."

8. Show HN: OSS Agent I built topped the TerminalBench on Gemini-3-flash-preview

HN discussion (282 points, 110 comments)

Dirac, an open-source coding agent, has topped the TerminalBench 2.0 leaderboard on the Gemini-3-flash-preview model with a score of 65.2%, outperforming Google's official baseline (47.6%) and the closed-source Junie CLI (64.3%). The agent is designed to optimize for accuracy and cost-efficiency by tightly curating context, thereby reducing API costs by an average of 64.8% while improving performance. Key features include hash-anchored parallel edits, AST manipulation for precise code changes, and multi-file batching to reduce latency. Dirac is available as a VS Code extension and a CLI tool, is licensed under Apache 2.0, and is a fork of the Cline project.
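Dirac's source isn't quoted in the summary, but the "hash-anchored edit" idea can be sketched generically: anchor each edit to a fingerprint of the lines it expects to replace, and reject the edit if the file has drifted since the edit was generated, which is what makes parallel edits safe. The function names below are illustrative, not Dirac's actual API.

```python
import hashlib

def line_hash(lines):
    """Stable fingerprint of a span of lines."""
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()[:12]

def apply_anchored_edit(text, start, end, expected_hash, replacement):
    """Replace lines [start, end) only if their hash matches the anchor.

    Raises if the target span has drifted since the edit was generated,
    e.g. because a concurrent edit landed first.
    """
    lines = text.splitlines()
    if line_hash(lines[start:end]) != expected_hash:
        raise ValueError("stale anchor: target lines changed")
    return "\n".join(lines[:start] + replacement + lines[end:])

src = "a = 1\nb = 2\nc = 3"
anchor = line_hash(["b = 2"])            # computed when the edit is proposed
patched = apply_anchored_edit(src, 1, 2, anchor, ["b = 20"])
```

Two edits anchored to disjoint spans can then be applied in either order; an edit whose anchor no longer matches fails loudly instead of corrupting the file.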

The HN discussion focused on clarifying Dirac's architecture, its applicability across different models and languages, and its performance relative to other tools. Commenters highlighted Dirac's technical innovations, such as its optimized hash-anchored edits and use of AST for context curation, which are seen as significant improvements over traditional methods. There was also skepticism about the benchmark's generalizability, with users requesting tests on other models like Qwen or Minimax to ensure the results aren't specific to Gemini-3-flash-preview. The conversation touched on practical use cases, with some users reporting success with Rust refactoring and others questioning the necessity of a new harness over extending existing tools like Pi. Overall, the community recognized the importance of the harness's design but called for more comprehensive benchmarks and clearer documentation about supported features and model compatibility.

9. Networking changes coming in macOS 27

HN discussion (176 points, 156 comments)

The article details two significant networking changes coming in macOS 27: the removal of AFP (Apple Filing Protocol) support and stricter TLS requirements. AFP, deprecated since OS X 10.9, will likely be discontinued, affecting users with Time Capsules or legacy NAS systems that lack SMB3 support. Apple Silicon Mac users upgrading to macOS 27 will lose AFP compatibility unless they replace their network storage. Additionally, Apple will enforce TLS 1.2+ with compliant ciphers and certificates for servers handling MDM, app distribution, and updates. Users must audit connections via logging tools like `sysdiagnose` and a provided terminal predicate. The timeline notes macOS 27's beta release in June 2026 and public release in September 2026.
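The article's exact log predicate isn't reproduced here, but the server-side requirement can be checked independently of macOS: refuse anything below TLS 1.2 at handshake time and see whether the server still connects. A minimal sketch using Python's ssl module, with `host` standing in for a server you manage:

```python
import socket
import ssl

def tls_floor_ok(host, port=443, timeout=5):
    """Return the negotiated TLS version for host, refusing anything below 1.2.

    A handshake failure here suggests the server would not satisfy a
    TLS 1.2+ requirement like the one described for macOS 27.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```

This checks only the protocol floor; the cipher and certificate requirements Apple describes would still need to be verified against the server's actual configuration.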

HN users expressed concern over Time Capsule obsolescence, noting that although Apple discontinued the hardware years ago, many users still rely on its Time Machine functionality. Alternatives like Ubiquiti NAS were praised for seamless integration. Many criticized macOS SMB as "slow and buggy," questioning Apple's commitment to non-revenue-generating features. Technical frustration centered on the complexity of TLS compliance audits, which require manual terminal commands where users expect a GUI. Historical context highlighted past networking missteps (e.g., the mDNSResponder reversal), while humorous comments included one user joking about AFP support for a Macintosh 512ke. Some argued the TLS change is overdue, while others lamented Apple's trend of abandoning local features in favor of cloud services.

10. US Supreme Court reviews police use of cell location data

HN discussion (203 points, 124 comments)

Unable to fetch article: HTTP 403

The Hacker News discussion on the Supreme Court's review of police use of cell location data reflects deep skepticism about law enforcement's use of geofencing warrants and a broader concern for digital privacy. Commenters argue that police will not relinquish this capability regardless of the Court's decision and predict that Justice Kavanaugh's vote is predetermined. The debate centers on the Fourth Amendment, with some equating geofencing data to public surveillance such as license plate readers or CCTV, while others counter that it is private, password-protected data. Reactions highlight the tension between law enforcement needs and privacy rights: some commenters argue that location data is analogous to public records and should be accessible with a warrant, while others call for strict guardrails on its use to prevent abuse. There is also significant discussion about Google's policy change to stop storing location data on its servers, and concern that other companies and carriers may still provide this information to the government.


Generated with hn-summaries