The Clique: Issue 001
2nd April 2026
Welcome to the first issue. The plan is simple: one thing I have been thinking about, three news stories worth your time, something worth trying, and a few links from across the week. The news here focuses on tools, feature updates, and how the technology is changing in practice. Politics, regulation, and economics are out of scope by design. There are better newsletters for that.
This week:
One Thing I’ve Been Thinking About: PKM and AI memory
Three Stories: AI sycophancy, the skills gap, and platforms cracking down on AI content
From the Blog: two recent posts, on Claude Code and AI jargon
One Thing Worth Trying: tropes.fyi
In Other News: Google Translate, ARC-AGI results, Claude Computer Use, and more
1. One Thing I’ve Been Thinking About
I’ve been thinking a lot about note-taking lately. Specifically, the gap between how useful a system feels to set up and how rarely you actually go back to it. PKM (Personal Knowledge Management) is the practice of fixing exactly that: keeping your ideas connected and findable over time, rather than letting them pile up unread. Tools like Notion, Obsidian, and Roam each take a different approach, but they are all solving the same problem.
What has me more interested right now is what happens when you connect one of these systems to an AI. I have been running Claude Code against my Obsidian vault, giving it live access so it can pull up relevant notes and pick up where the last session left off. The default problem with AI tools is that every conversation starts from scratch, but a well-maintained knowledge base changes that.
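For a concrete sense of what “live access” means, here is a minimal sketch of the pull-up-relevant-notes step. An Obsidian vault is just a folder of Markdown files, so a few lines of Python can surface candidates before a session starts. The vault path is an assumption (point it at your own), and this is a crude keyword match, nothing like the retrieval Claude Code actually does internally.

```python
# Toy sketch: surface vault notes relevant to a topic before starting a session.
# VAULT is an assumption -- change it to wherever your own vault lives.
from pathlib import Path

VAULT = Path.home() / "Obsidian" / "Vault"  # hypothetical location

def relevant_notes(topic: str, limit: int = 5) -> list[Path]:
    """Return vault notes that mention the topic, most recently edited first."""
    hits = [
        p for p in VAULT.rglob("*.md")
        if topic.lower() in p.read_text(encoding="utf-8", errors="ignore").lower()
    ]
    hits.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return hits[:limit]

if __name__ == "__main__":
    for note in relevant_notes("PKM"):
        print(note.relative_to(VAULT))
```

Even something this simple changes the shape of a session: instead of re-explaining context, you point the model at the handful of notes where that context already lives.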
I have an article coming soon that breaks all of this down. Stay tuned.
2. Three Stories That Actually Matter
i. Asking an AI for advice on a personal dispute will likely make you less open to compromise
A peer-reviewed study published in Science this month tested 11 large language models, including ChatGPT, Claude and Gemini, on interpersonal advice scenarios, including real prompts drawn from Reddit’s “Am I the A**hole” community. Across all models, AI responses endorsed the user’s behaviour an average of 49% more often than humans responding to the same prompts. Even when the behaviour described was harmful or illegal, models still sided with the user 47% of the time.
The researchers then measured what sycophantic responses did to the people receiving them. Across 2,400 participants, those who got validating answers became more certain they were in the right, rated AI responses as more trustworthy, and said they were less likely to apologise or make amends in real disputes. The model did not merely fail to help. Getting validation from it actively made people harder to reason with.
The structural problem the researchers identified is an incentive one. Users prefer sycophantic responses. They rate them as more helpful and say they would return to that model. This means AI companies face a market incentive to make their models more validating, not less. The paper describes sycophancy as “an urgent safety issue” and calls for policymaker attention.
For anyone using AI for decisions where they have a stake in a particular answer, the implication is clear: the model is not neutral. If you want a genuine second opinion on your thinking, ask explicitly for counterarguments, and treat what comes back with some scepticism even then.
Sources: Stanford News · Science · TechCrunch · Scientific American · Fortune
ii. The productivity gap between people who use AI well and everyone else is now quantifiable
Three separate, recently published data sets put concrete numbers on a gap that has been visible but hard to measure.
OpenAI’s State of Enterprise AI 2025 report found a sixfold productivity gap between its most active enterprise users and the median employee. Anthropic’s Economic Index, published this month, found that experienced users achieve a 10% higher success rate than newer adopters. PwC’s 2025 Global AI Jobs Barometer found that roles requiring AI skills command a 56% wage premium over equivalent roles without them, up from 25% the previous year.
The divide is not primarily about access. It is about depth of use. TechCrunch reported this week that early adopters treat AI as a thinking tool, iterating through tasks rather than firing off one-off prompts. That habit takes time to develop, and the workers who started earlier have more of it.
The practical implication is not alarming, but it is concrete. If you are a casual AI user, the opportunity cost of staying there is growing. Pick one task you currently do manually and commit to doing it through AI consistently for a month. The learning curve is real. Time spent on it now is the investment.
Sources: Anthropic Economic Index · TechCrunch · VentureBeat · PwC Global AI Jobs Barometer
iii. Platforms are drawing hard lines on AI-generated content, and readers are making their own rules too
English Wikipedia passed a near-unanimous ban this month on large language models generating or rewriting article content. The policy covers almost every use case, with two narrow exceptions: copy-editing an editor’s own prose, and assisting with translation, provided the editor is fluent in both languages. It is the most unambiguous platform-level restriction on AI writing announced this year.
Medium has had a policy in place for longer. AI-generated writing is banned from the paid Partner Program and pushed to reduced distribution elsewhere. But CEO Tony Stubblebine has acknowledged publicly that reliable detection tools do not exist, leaving enforcement largely to human curators catching obvious cases. The policy is real but the coverage is patchy.
On the reader side, Alberto Romero’s recent essay in The Algorithmic Bridge names the emerging response: “AI;DR,” meaning “AI; didn’t read.” His argument is that the reflex is understandable but practically futile. Most readers cannot reliably detect AI-generated writing. The implicit deal between writer and reader has shifted: where readers once exchanged their time for a writer’s genuine effort, they now have no reliable way of knowing what they are getting.
For writers, authenticity is becoming a distinguishing feature rather than a baseline assumption.
Sources: The Verge · TechCrunch · The Algorithmic Bridge · Medium Help Centre
3. From the Blog
Claude Code: Everything You Need to Know: A practical guide to what Claude Code actually is and how to get started with it, written for people who are curious but haven’t yet taken the plunge.
AI Jargon Buster: AI, Machine Learning, Deep Learning, Generative AI: A plain-English breakdown of the terms most commonly used interchangeably at work, useful for anyone who wants to follow AI conversations with more confidence.
4. One Thing Worth Trying
Most advice about detecting AI-generated writing points you toward detection tools. The problem is that those tools are unreliable. A more useful skill is learning to recognise the patterns yourself.
tropes.md is a reference list of the specific habits that appear with unnatural frequency in AI-generated text. Some are word-level: “delve,” “leverage,” “tapestry,” the adverb “quietly” used to make mundane things sound more important. Some are structural: the “It’s not X, it’s Y” reframe, the three-beat dramatic countdown (“Not A. Not B. Just C”), the self-posed rhetorical question answered for effect. Others are tonal: false suspense transitions (“Here’s the kicker”), patronising analogies that assume the reader needs hand-holding, and stakes inflation that “turns a product update into a civilisational moment” (see what I mean?).
Reading through the list takes about ten minutes. Once you have seen these patterns named, you start noticing them everywhere: in marketing emails, in LinkedIn posts, in articles like this one that feel oddly smooth. (Okay, I’ll stop now.)
You start noticing them in your own AI-assisted drafts too. And the list works in reverse: give the tropes.md file to an LLM or AI agent as a style guide for what not to include when writing.
Better yet, include the style guide as part of a reusable skill, so it is applied automatically every time the model writes for you. You can even lint your own drafts against it, as in the sketch below.
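A minimal sketch of that linting idea, assuming a tiny hand-picked subset of the patterns named above; tropes.md itself lists far more, and naive matching will throw false positives (“leverage” is a perfectly good word in finance). Still, it is enough to flag a draft before you hit publish.

```python
import re

# Hand-picked subset of the tropes named above; the real tropes.md has many more.
TROPES = [
    r"\bdelve\b",
    r"\bleverage\b",
    r"\btapestry\b",
    r"\bquietly\b",
    r"here[’']s the kicker",
    r"it[’']s not \w+.{0,40}?, it[’']s",  # the "It's not X, it's Y" reframe
]

def flag_tropes(draft: str) -> list[tuple[int, str]]:
    """Return (line number, matched text) for every trope found in the draft."""
    hits = []
    for lineno, line in enumerate(draft.splitlines(), start=1):
        for pattern in TROPES:
            for match in re.finditer(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, match.group(0)))
    return hits

draft = "We delve into a rich tapestry of features. Here's the kicker: it's free."
for lineno, text in flag_tropes(draft):
    print(f"line {lineno}: {text!r}")
```

Treat the output as a prompt to reread, not a verdict; the point of the exercise is to train your own eye, and the script just tells you where to look.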
5. In Other News...
A paper titled “Thinking Fast, Slow and Artificial” argues that AI has become a third cognitive layer that replaces deliberate human reasoning rather than supporting it, worth reading alongside the AI sycophancy story above.
Anthropic reported that Claude’s paid subscriptions have more than doubled this year, with the consumer user base now estimated at between 18 and 30 million.
Google Translate’s live translation feature now works on any headphones on iOS, covering over 70 languages in real time.
Google launched a campaign encouraging users to switch to the Gemini app, positioning it as a direct alternative to other AI assistants for everyday tasks.
Granola raised $125M at a $1.5B valuation as the meeting notetaker app expands toward a broader enterprise AI platform.
The ARC-AGI-3 benchmark results show leading AI models scoring below 1% on tasks designed to test novel on-the-fly reasoning, against a human baseline of 100%.
Anthropic updated Claude Code with an auto mode that lets the tool decide which actions it can take without prompting the user for approval at each step.
OpenAI published documentation for Codex plugins, allowing developers to extend the coding assistant with third-party integrations.
Claude’s new Computer Use feature lets the model take control of your screen, navigating apps, clicking through interfaces, and completing tasks directly. Dispatch, a companion feature, lets you assign those tasks remotely from your phone.
Microsoft 365 Copilot’s Researcher now sends drafts through a second model for review before finalising, with an alternative mode that runs Anthropic and OpenAI models in parallel and compares where they agree and diverge.
If you made it this far, I appreciate you!
Stay curious,
James
Enjoyed this issue? Consider forwarding to a friend or colleague!
Hey, look, there’s even a little button for it and everything 👇


