VOL. 01 — INDEPENDENT TOOL INTELLIGENCE · FOR FOUNDERS & SOLOPRENEURS
The Blog · Updated Weekly

AI tools, actually tested.

Hands-on writing on AI tools, prompting tactics, no-code app building, and the solopreneur tech stack — written for founders who'd rather learn from someone who shipped than read another AI-generated listicle.

Latest posts.

Four deep dives this month — every word written by a human, every tool tested with our own credit card, every recommendation backed by something we actually shipped.

The solo founder stack
Tool Stack · 14 min read · Apr 22, 2026

The Best AI Tools for Solo Founders in 2026 (Hands-On Tested)

The exact stack we'd use to start an internet business from scratch in 2026. Twelve AI tools tested across four months and three real projects, with honest verdicts on which ones earned a permanent slot.

Read the post →
ChatGPT vs Claude
AI Comparison · 11 min read · Apr 19, 2026

ChatGPT vs Claude in 2026: Which AI Should Founders Actually Pay For?

Both are $20/month. Both claim to be the best. We ran the same six founder workflows through each one — coding, writing, research, planning, customer support, and SEO drafting — and tracked exactly where each one wins.

Read the post →
SaaS in 72 hours, no code
Build Guide · 13 min read · Apr 15, 2026

How to Build a SaaS Without Code in 72 Hours (Real Walkthrough)

A step-by-step build log of an actual SaaS product we shipped without touching code. The tools, the prompts, the dead-ends, and the working app we have running today — including the parts where AI failed and we had to find workarounds.

Read the post →
7 prompting tactics
AI Tips & Tricks · 10 min read · Apr 10, 2026

7 AI Prompting Tactics That Save Solo Founders 10 Hours a Week

Forget "act as an expert" prompt templates. These are the prompting patterns we actually use — tested across hundreds of real workflows, with side-by-side examples showing what generic prompts produce versus what these tactics produce.

Read the post →

The Best AI Tools for Solo Founders in 2026 (Hands-On Tested)

Twelve tools tested across four months and three real projects. Most underdelivered. Five earned a permanent slot in the stack. Here's what's worth paying for in 2026 — and what to skip even when it's free.

The Solo Founder AI Stack — 2026 · Five tools that earned their place. Seven that didn't.

Every list of "best AI tools for solo founders" I've read in the last twelve months has the same problem: nobody actually used the tools. They scraped product pages, paraphrased the marketing copy, and shipped a 3,000-word SEO listicle that recommends every product on the page. That's not a review — it's a referral funnel with a thin coating of editorial paint.

This is different. Over the last four months I subscribed to twelve AI tools — on a personal credit card, monthly billing, no comp accounts — and ran them through three real projects: a paid newsletter (this one), a no-code SaaS prototype, and a content site that ranked on page one of Google in three weeks. Most of the tools didn't survive contact with the work. The ones that did are below.

The 5 AI tools every solo founder should pay for in 2026

I'm going to skip the "honorable mentions" section that most listicles use to pad their word count. These are the five tools I'd put on a fresh credit card tomorrow if I had to start over. Everything else either failed in testing or duplicates what these five already do.

1. Claude Pro — the thinking partner ($20/month)

Claude is the AI I open first when the work involves judgment: structuring an argument, designing a system, writing copy under my own name, debugging code I'd be embarrassed to ship broken. The output reads less like a press release and more like something a sharp colleague would say. The 200K token context window means it can hold an entire codebase or a 500-page PDF in working memory, which changes how you can use it. Claude Code — the terminal coding agent — is included free with Claude Pro and has replaced roughly 70% of what I previously paid Cursor to do.

2. ChatGPT Plus — the execution engine ($20/month)

ChatGPT is the AI I open when the work involves doing something fast: image generation with DALL-E, voice mode for working out loud while walking, the GPT marketplace for niche tasks, real-time web search when I need a fact verified before publishing. It's the more versatile of the two, and the ecosystem advantage is real.

3. Beehiiv — the newsletter platform ($0–$84/month)

The most affiliate-friendly newsletter platform we've tested, with deliverability that measurably beats Substack and ConvertKit. The Boost network — paid recommendations between newsletters — is built into the editor as a first-class block. Free up to 2,500 subscribers, no trial wall. This is where serious newsletter operators are migrating in 2026. See our full Beehiiv review.

4. Emergent.sh — the AI app builder ($0–$20/month)

The first AI app builder we've tested where "full-stack" actually means full-stack. Auth, database, Stripe, hosting — all handled. We built a working SaaS prototype in 72 hours, which we'll walk through in the build guide below. If you're a non-technical founder who's been trying to get an MVP shipped for six months, this is the unblock. See our full Emergent.sh review.

5. Surfer SEO — the content compass ($89+/month)

Best-in-class on-page SEO scoring. The Content Editor tells you exactly what terms and entities the top-ranking pages are using and shows you a real-time score on your draft. We hit page one in three weeks on a fresh domain — no link building, no PR — using nothing but Surfer-optimized articles. Worth the price if you're publishing weekly. Don't buy it if you publish monthly.

Pay for both Claude and ChatGPT — they're complementary, not interchangeable. At $40/month combined they replace tools that used to cost $400/month.
Stack Cost · ~$130/month for the entire solo founder AI stack

The 7 AI tools we tested and dropped

The skip list is more useful than the recommend list, because the cost of buying the wrong tool isn't just the subscription — it's the weeks you spent learning it, the data you migrated, and the workflow you built around it before realizing it wasn't worth keeping.

Notion AI ($10/user/month, now usage-based)

Fine for keeping AI inside Notion's wiki. Underwhelming for actual work compared to Claude or ChatGPT. The recent shift to usage-based pricing means heavy users will pay more for less. Skip it unless you live in Notion all day.

Jasper AI ($49+/month)

Was best-in-class three years ago. Now it's a B+ tool in an A-tier market. ChatGPT Plus + custom GPTs beats it for $20/month. The category is collapsing — Jasper recently sunset its Boss Mode tier and refunded annual subscribers. Read that as a signal.

Copy.ai ($49+/month)

Same problem as Jasper. Templates can't compete with general-purpose AI in 2026. The output is consistent but flat — fine for SEO content farms, useless for anything you'd publish under your own name.

Lovable ($25+/month)

Excellent for static landing pages and marketing sites. Falls over the moment you need a real backend, custom data models, or anything that isn't a CRUD app. Better than the no-code generation before it, but Emergent.sh is a generation ahead for actual product building.

Pictory and similar AI video tools

If you need talking-head video, you need a person. The output from AI video tools in 2026 is recognizable as AI within three seconds. The valid use case is short B-roll for existing content. The marketing pitch — "replace your video team" — is fiction.

Most "AI sales" tools

Tools promising autonomous lead generation and outbound sequences are mostly spam machines that get your domain blocked. We tested four. None outperformed a well-crafted manual outreach process. The category is too immature.

Most AI-powered productivity bundles

Tools claiming to "replace your entire workflow with AI" are trying to do too much, and they're worse at every individual thing than the dedicated tool. Pay for specialists. Skip the suites.

What changed in 2026 that the lists from 2024 missed

If you're reading older "best AI tools" listicles, three structural shifts have made most of their recommendations obsolete:

1. The mid-tier AI writer category collapsed. ChatGPT Plus and Claude Pro at $20/month each now do what Jasper, Copy.ai, and Writesonic used to charge $49+ for. The dedicated AI writer category exists today only for content farms and high-volume agencies. For everyone else, it's strictly worse than general-purpose AI.

2. Coding agents went mainstream. Claude Code and similar terminal-based agents are now bundled with consumer Pro plans. The standalone "AI for developers" subscription category — Cursor, Codeium, and the rest — is being squeezed. Cursor still wins on UX for full-time developers, but the value gap to Claude Code is shrinking weekly.

3. Full-stack AI app builders got real. The 2024 generation of no-code AI tools were front-end toys. The 2026 generation — Emergent.sh, the latest Lovable, Bolt — actually ship working backends. Non-technical founders who've been waiting for the MVP unblock can stop waiting.

How to evaluate any AI tool before subscribing

The single highest-ROI hour you'll spend before adopting a new tool is the hour you spend running your real workflow on the free tier. Toy projects lie. The friction you'll feel on day 30 is the friction you should test for on day 1.

The other three checks I run on every tool before paying:

  • Cancellation flow speed. If I can't find the cancel button in 60 seconds, the company is hostile to its customers and the product won't get better over time.
  • Changelog velocity. A public changelog with updates in the last 30 days is the best single signal of product health. Stale changelogs (3+ months silent) are obituaries.
  • Pricing math at 10x usage. The right tool is rarely the one that wins at today's usage — it's the one that doesn't bankrupt you when you grow.

The honest take

The best AI stack for solo founders in 2026 is smaller and cheaper than the lists suggest. Five tools, ~$130 a month, and you have what cost agencies $5K/month to deliver in 2022. Resist the urge to buy more. Tool sprawl is the silent killer of solo-founder productivity.

Want the weekly digest?

One rigorously tested review, one comparative breakdown, one signal from inside the SaaS world. Every Sunday.

Subscribe Free →

ChatGPT vs Claude in 2026: Which AI Should Founders Actually Pay For?

Both are $20/month. Both claim to be the best. We ran the same six founder workflows through each one and tracked exactly where each one wins — and where each one quietly loses.

ChatGPT vs Claude · Tested 2026 · Two tools, six workflows, one honest verdict.

Here's the conclusion most ChatGPT vs Claude comparisons bury under 4,000 words of benchmarks: they're not interchangeable, and you probably want both. At $20/month each, you're spending $40 to get capabilities that did not exist at any price three years ago. The real question isn't "which one to buy" — it's "which tasks should I route to which one?"

This guide answers that. We ran six common founder workflows — coding, long-form writing, research synthesis, customer support copy, strategic planning, and SEO content drafting — through both ChatGPT Plus (GPT-5.4 / 5.5) and Claude Pro (Opus 4.6 / Sonnet 4.6) over six weeks. We logged the results, the friction, and the moments where one tool just outperformed the other.

The TL;DR (because you're busy)

If you only read one paragraph: Claude wins on writing, coding, and reasoning. ChatGPT wins on speed, ecosystem breadth, image generation, voice mode, and real-time web search. The benchmark gap on raw capability has narrowed to single digits. The practical gap on specific workflows is much bigger than the benchmarks suggest.

Buy ChatGPT Plus if:

  • You need image generation (DALL-E is built in)
  • You want voice mode for hands-free brainstorming
  • You depend on real-time web search inside your AI workflow
  • You're inside the Microsoft ecosystem (Copilot integration matters)
  • Your work is broad and varied — many shallow tasks, not a few deep ones

Buy Claude Pro if:

  • You write under your own name and the prose has to sound like you
  • You code and want Claude Code (a terminal coding agent, included free)
  • You work with long documents — Claude's 200K context window is a different category
  • You do strategic or analytical work where reasoning quality matters more than speed
  • You want fewer hallucinations on hard reasoning problems

If you can swing both, do both. The combined cost is less than one mid-range SaaS subscription.

Claude is the thinking partner. ChatGPT is the execution engine. Use them like that and you'll stop arguing about which one is better.

The six workflows, scored head-to-head

Workflow Test Results · 6 founder workflows · scored side by side

1. Coding (Winner: Claude — by a clear margin)

Claude wins coding. It's not close in 2026. On SWE-bench Verified — the industry standard for real-world software engineering tasks — Claude Opus 4.6 hits 80.8%; GPT-5.4 lands around 80%. The gap on benchmarks is narrow. The gap on actual code is wider, because Claude is better at the part that matters: writing code that doesn't have subtle bugs in the edge cases. We tested both with the same prompt — implement a debounce function with TypeScript types — and Claude returned cleaner, properly generic code on the first try; ChatGPT used "any" in two places and required follow-up prompts to fix.
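
For reference, here's roughly the bar we were grading against: a fully generic debounce with no "any" anywhere. This is a clean-room sketch of the expected shape, not either model's verbatim output.

  // A debounce that preserves the wrapped function's parameter types.
  function debounce<Args extends unknown[]>(
    fn: (...args: Args) => void,
    waitMs: number
  ): (...args: Args) => void {
    let timer: ReturnType<typeof setTimeout> | undefined;
    return (...args: Args) => {
      if (timer !== undefined) clearTimeout(timer);
      timer = setTimeout(() => fn(...args), waitMs);
    };
  }

  // Usage: the compiler infers that onResize takes a number.
  const onResize = debounce((width: number) => console.log(width), 200);
  onResize(1024);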

The killer feature for solo founders: Claude Code is a terminal-based coding agent included free with Claude Pro. It reads your entire codebase, edits multiple files, runs commands, uses your local git. For solo developers who used to pay $20/month for Cursor, that's potentially $20/month back in the stack budget.

2. Long-form writing (Winner: Claude — but it's closer than coding)

Claude produces more natural prose. Sentence length varies. Paragraph transitions flow. The output feels less like an AI imitation of writing and more like writing. ChatGPT is competent but recognizable — there's a particular cadence and word choice ("delve," "tapestry," "in today's competitive landscape") that gives it away within two paragraphs.

For anything you'd publish under your own name — newsletters, blog posts, pitch decks, founder updates — Claude wins. For drafts you'll heavily rewrite anyway, the gap matters less.

3. Research synthesis (Winner: ChatGPT — for breadth, Claude for depth)

If your research question requires real-time web search ("what's the current price of X," "what did Y company announce this week"), ChatGPT wins because it's actually plugged into the live internet. Claude doesn't browse the web natively.

For research questions where you have the source material already — paste in 10 PDFs, ask Claude to synthesize the patterns — Claude wins on quality of synthesis. Its ability to hold long, multi-document context and reason across all of it is genuinely the standout capability.

4. Customer support copy (Winner: ChatGPT — by a small margin)

Both produce competent customer support drafts. ChatGPT is slightly better at the empathetic-but-firm tone most support emails need, and its GPT marketplace has dozens of pre-built support templates that save setup time. Claude can match the quality with a good system prompt, but ChatGPT gets there faster out of the box.

5. Strategic planning (Winner: Claude — by a clear margin)

Strategic and analytical work is where Claude's reasoning quality shows up most. Asked the same complex strategic question — "how should I think about pricing for a B2B SaaS targeting solo founders" — Claude produced a better-structured answer, considered more alternatives, and was more honest about uncertainty. ChatGPT gave a faster answer that was less rigorous.

This category aligns with Claude's lead on GPQA Diamond (PhD-level reasoning), where it scores 91.3% — the widest margin of any major benchmark category.

6. SEO content drafting (Winner: ChatGPT — barely)

For SEO-driven content production at volume, ChatGPT's speed and ecosystem advantages tip the balance. The Custom GPTs marketplace has SEO-optimized writing templates pre-built. Output is consistent and fast. Claude produces better-quality prose but is slightly slower and requires more prompt engineering to hit the structural requirements that SEO needs (target word count, FAQ blocks, schema-friendly headings).

What about pricing?

Both consumer plans are $20/month. There's no cost-based decision at the consumer tier. At higher tiers:

  • Claude Max: $100+/month for higher usage limits and priority access
  • ChatGPT Pro: $200/month for the o-series reasoning models, advanced voice, and higher limits
  • API pricing: Claude Sonnet 4.6 at $3/$15 per million tokens (input/output), GPT-5.4 at $2.50/$15. Effectively a wash unless you're at significant volume; see the quick math below.
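
To make "effectively a wash" concrete, here's the arithmetic at one hypothetical volume (the token counts are illustrative; the per-million rates are the ones above):

  // Back-of-envelope API cost at an illustrative volume:
  // 5M input tokens + 1M output tokens per month.
  const M = 1_000_000;

  function monthlyCost(
    inTok: number, outTok: number,
    inRate: number, outRate: number
  ): number {
    return (inTok / M) * inRate + (outTok / M) * outRate;
  }

  console.log(monthlyCost(5 * M, 1 * M, 3.0, 15.0)); // Claude Sonnet 4.6: $30.00
  console.log(monthlyCost(5 * M, 1 * M, 2.5, 15.0)); // GPT-5.4: $27.50

A $2.50/month difference at that volume, which is why routing by task matters more than the rate card.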

If you're a solo founder, you almost certainly don't need either premium tier. The $20 plan from each is the right answer until proven otherwise by your actual usage hitting limits.

The model routing approach (advanced)

If you're using AI heavily across many tasks, the smartest 2026 setup isn't picking one — it's routing. I default to Claude for: writing under my name, coding, long documents, strategic work. I default to ChatGPT for: image generation, voice brainstorming, web-search-required research, fast iteration on shallow tasks.
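
If you want the habit to be explicit rather than mental, the whole policy fits in a lookup table. A hypothetical sketch (the task labels are mine, not any product's API):

  // Hypothetical routing table encoding the defaults above.
  type Model = "claude" | "chatgpt";

  const routes: Record<string, Model> = {
    "writing-under-my-name": "claude",
    "coding": "claude",
    "long-documents": "claude",
    "strategic-work": "claude",
    "image-generation": "chatgpt",
    "voice-brainstorming": "chatgpt",
    "web-search-research": "chatgpt",
    "shallow-iteration": "chatgpt",
  };

  const routeFor = (task: string): Model => routes[task] ?? "claude";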

The mental cost of switching is small. The performance gain is real. Stop arguing about which AI is "better" and start treating them like specialized tools.

The honest take

If forced to pick one for a solo founder, I'd pick Claude — the writing quality and Claude Code make it slightly more valuable for the work most founders do most often. But the right answer is to pay for both. $40/month is a rounding error in your tool stack and you'll get capability that one alone can't deliver.

More tool comparisons in your inbox.

Honest, hands-on AI tool reviews and comparisons every Sunday. No affiliate spam, no vendor promotion.

Subscribe Free →

How to Build a SaaS Without Code in 72 Hours (Real Walkthrough)

A step-by-step build log of an actual SaaS we shipped without writing a line of code. The tools, the prompts, the dead-ends, and the working app we have running today — including the parts where AI failed and we had to find workarounds.

No-Code AI App Build Log · SaaS in 72 hours, $0 in dev cost.

I've been telling non-technical founders for years that the right time to build a SaaS without code was "soon, but not yet." That changed in 2026. The current generation of AI app builders — specifically Emergent.sh, the latest Lovable, and Bolt — finally ship working full-stack applications, not just pretty mockups. The "no-code SaaS" promise is real now in a way it wasn't even six months ago.

To pressure-test this, I gave myself a constraint: build a real SaaS, with real users and real Stripe payments, in one weekend. No code. No developer help. Just AI tools and a clear-headed idea. This is the build log — every prompt, every wall I hit, every workaround. If you've been waiting for the right moment to ship your idea, this might be it.

The product I built

The product was deliberately simple — a tool I'd genuinely use and could pitch in one sentence: a SaaS where freelance writers can submit articles, paying clients can review and comment, and Stripe handles per-article payment. Not a billion-dollar idea. A focused, useful product I could ship in 72 hours.

Why this matters: most no-code-SaaS guides online build a glorified to-do app and call it a day. Real SaaS has auth, payments, multi-user roles, notifications, and an admin dashboard. If the build doesn't include those, it's a tutorial, not a product.

The stack I used

Three tools. That's it.

  • Emergent.sh — for the actual app build. The one tool that handles auth, database, payments, and hosting natively. Our full review.
  • Claude Pro — for crafting prompts, debugging issues, and writing the marketing copy. The thinking partner.
  • Stripe — for payments. Native integration with Emergent.sh on the Pro plan.

Total cost for the weekend: $20 for Emergent's Pro plan (the free tier covered the early build, but the Stripe integration needs Pro) + $20 for the Claude Pro subscription I was already paying for. Stripe takes a percentage of transactions, so no upfront cost there.

Hour 0–6: Spec and prompt

The single most important thing in any AI-built product is the spec. Vague spec, vague output. The first six hours weren't building — they were getting the spec right. I drafted it in Claude:

"Build a SaaS app called WriterDeck. Three user types: writers (submit articles), clients (review and pay), admins (manage both). Writers create accounts and submit articles via a markdown editor. Clients see a dashboard of submissions, can comment, request revisions, and approve. Approval triggers Stripe payment from client to platform, with platform taking a 10% fee. Writers can withdraw earned funds via Stripe Connect. Admin can see all transactions and intervene in disputes."

I asked Claude to refine the spec — to identify the parts that would be ambiguous to an AI app builder, to surface edge cases, and to suggest what to defer to V2. The refined spec was about 800 words and identified six subtle issues I hadn't thought through. (Example: what happens if a client requests a revision after paying?)

Hour 6–24: First build pass with Emergent.sh

Build Velocity · ~4 hours to a working V0 with auth + database

I pasted the refined spec into Emergent.sh. About four hours later — including iterative back-and-forth — I had a functional V0. The login worked. The article submission form worked. The client dashboard showed submissions. The database had the right tables. The hosting was already live at a temporary URL.

What worked surprisingly well: the auth flow. Emergent provisioned email + password authentication, password reset, and session management without me asking explicitly. It just understood that a SaaS needs login.

What didn't work first try: the role-based permissions. Writers could see other writers' drafts. I had to explicitly prompt: "Writers should only see their own articles. Clients should see articles assigned to their organization. Admins see everything." After that prompt, it fixed itself in one iteration.
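
For a sense of what that one prompt changed under the hood, the fix amounts to scoping every read by role. A hypothetical sketch of the rule, with illustrative names rather than Emergent's generated code:

  // Hypothetical role-scoping rule; not Emergent's actual generated code.
  type Role = "writer" | "client" | "admin";

  interface User { id: string; role: Role; orgId: string; }
  interface Article { id: string; authorId: string; orgId: string; }

  function visibleArticles(user: User, articles: Article[]): Article[] {
    switch (user.role) {
      case "writer": return articles.filter(a => a.authorId === user.id); // own articles only
      case "client": return articles.filter(a => a.orgId === user.orgId); // their org's submissions
      case "admin":  return articles;                                     // everything
    }
  }

This is also why "always test the parts where users with different roles interact" (more on that below) is non-negotiable: the permissive default is invisible until you log in as the wrong role.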

Hour 24–48: Stripe and the real-product polish

Stripe was the moment of truth. This is where most no-code tools fall apart — payment integration is hard, and most AI app builders fake it with a checkout link rather than a real product.

Emergent on the Pro plan ($20/month) genuinely integrates Stripe. I pasted my Stripe test API key, and within an hour it had wired the payment flow: clients see an "Approve and Pay" button, clicking it triggers a Stripe checkout, payment success triggers a webhook that updates the article status, and the writer's earnings increment.
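
The webhook step it wired is conceptually simple. Here's a hand-written sketch of the same flow, assuming Express and the stripe npm package; markArticlePaid is a placeholder for the status update and earnings increment, and none of this is Emergent's actual output:

  import Stripe from "stripe";
  import express from "express";

  const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
  const app = express();

  // Placeholder for the article status update + writer earnings increment.
  function markArticlePaid(articleId: string | undefined): void { /* ... */ }

  app.post(
    "/webhooks/stripe",
    express.raw({ type: "application/json" }), // Stripe verifies the raw body, not parsed JSON
    (req, res) => {
      let event: Stripe.Event;
      try {
        event = stripe.webhooks.constructEvent(
          req.body,
          req.headers["stripe-signature"] as string,
          process.env.STRIPE_WEBHOOK_SECRET!
        );
      } catch {
        return res.status(400).send("Bad signature");
      }
      if (event.type === "checkout.session.completed") {
        const session = event.data.object as Stripe.Checkout.Session;
        markArticlePaid(session.metadata?.articleId); // metadata set when creating the session
      }
      res.sendStatus(200);
    }
  );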

The wall I hit: Stripe Connect (the part that lets writers withdraw money) was beyond what the AI builder could ship without significant hand-holding. I deferred this to V2 and shipped with manual payouts for the first month. This is a totally legitimate solo-founder move — your first ten customers don't need every feature, they need it to work.

Hour 48–72: Polish, marketing copy, and ship

The final 24 hours were the unsexy work. Email templates for password reset and notifications. Error states that didn't look like programmer humor. A landing page. Pricing page copy. An About page. None of this is hard, but all of it takes time.

I wrote the marketing copy in Claude — three iterations to get the tone right. Pasted into Emergent. Adjusted the CSS variables to make it not look like every other AI-generated app. Connected my custom domain. Tested the entire flow end-to-end with a real Stripe test card.

Hour 71. The app worked. I tweeted it. Three sign-ups in the first hour. The first paid transaction came through within a day.

The 2024 generation of "no-code AI" tools were front-end toys. The 2026 generation actually ship working backends with auth, databases, and payments. The unblock for non-technical founders is real now.

What I'd do differently

Spec longer, build faster. The hours I "lost" on the spec saved me roughly 12 hours of debugging during the build. Don't skip this. Use Claude to refine the spec until it's tight.

Defer aggressively. The Stripe Connect deferral was the right call. Your V1 should ship with the smallest possible feature set that solves the core problem. Everything else is V2.

Don't trust AI on edge cases. The role-based permissions issue could have been a security problem in production. Always test the parts where users with different roles interact. AI builders default to permissive — you have to be explicit about what's locked down.

Buy Pro early. The free tier of every AI app builder is generous enough to validate the idea, but you'll burn through credits fast once you're seriously building. $20 for the Pro plan saved me roughly four hours of waiting on credit refreshes.

The honest limitations

I want to be straight about what AI app builders can't do well in 2026, even after the recent generation leap:

  • Complex schemas. The build slows noticeably once your data model has 30+ tables. Stay simple in V1.
  • Custom integrations. Native integrations (Stripe, Auth providers, common DBs) are great. Custom third-party APIs require a developer.
  • Performance at scale. The generated code prioritizes correctness over performance. Once you're past 10K users, you'll likely need to rewrite parts manually.
  • Custom UI beyond templates. The output looks good in a "modern SaaS" sense, but if your design vision is unusual, you'll fight the tool.

None of these are dealbreakers for V1. All of them become real once you're building V3 or V4. AI app builders are a phenomenal launch tool — they're not yet a phenomenal scale tool. Plan for that.

The honest take

Build the V1 with AI. Hire a developer for V3. The 2026 generation of no-code AI app builders compresses the timeline from "idea to first paying customer" from months to days. That's the unblock you needed. Ship V1 fast, validate the idea with real money, then bring in a developer to take it to scale once you've proven there's a there there.

Get the build guides in your inbox.

Real SaaS build walkthroughs, AI tool reviews, and the prompting tactics that actually work. Every Sunday.

Subscribe Free →

7 AI Prompting Tactics That Save Solo Founders 10 Hours a Week

Forget "act as an expert" prompt templates. These are the prompting patterns I actually use — tested across hundreds of real workflows, with side-by-side examples showing what generic prompts produce versus what these tactics produce.

7 Prompting Patterns That Actually Work · Skip "act as an expert." Try these instead.

The "best AI prompts" lists circulating in 2026 are mostly garbage. They're either "act as a [profession] and help me [verb]" templates that produce generic output, or hyper-specific 800-word prompt scripts that work for one task and break for everything else. Neither of those approaches survives contact with real founder work.

What follows is different. These are seven prompting patterns — repeatable structures, not specific scripts — that I use daily across Claude and ChatGPT. Each one is paired with a side-by-side: a generic prompt and the better version, with notes on why the better version works. Steal them, modify them, build your own.

1. The "Counterargument" pattern

Default AI behavior is to agree with you. Ask it to evaluate an idea and it'll find merit in everything. This is exactly what you don't want when making decisions. The fix is forcing the AI to argue against the position before assessing it.

Generic prompt: "Should I launch this product on Product Hunt?"

The pattern: "Steelman the case AGAINST launching this product on Product Hunt — give me the strongest version of why this is a bad idea. Then steelman the case for. Then give your honest assessment, weighing both."

This works because the AI is forced to articulate the counterargument first, which means it can't lean on confirmation bias. The final assessment is meaningfully more grounded. I use this pattern for every non-trivial decision.

2. The "Constraint" pattern

AI loves to give you a complete answer. Sometimes a complete answer is the wrong answer — what you need is the best answer given a specific constraint. Forcing the constraint up front changes the output dramatically.

Generic prompt: "Help me write an email to a customer who's complaining."

The pattern: "Write a 60-word email to a complaining customer. The constraint: no apology in the first sentence. Open with what you're going to do for them, not with how sorry you are."

The constraint forces the AI to skip the boilerplate opening that almost every "customer service email" template defaults to. The output reads like a real human wrote it, not like a customer success template.

3. The "Specificity Ladder" pattern

Prompt Quality · Generic prompts produce generic output. Specificity wins.

The single highest-leverage thing you can do to improve AI output is add specificity. Generic prompts produce generic output, every time. The "specificity ladder" pattern is how I systematically push toward a better prompt.

Generic: "Write a tweet about my product."

One rung up: "Write a tweet about my product, a no-code SaaS builder for non-technical founders."

Two rungs up: "Write a tweet about my product, a no-code SaaS builder for non-technical founders. The audience is on tech Twitter, slightly skeptical of AI hype, and has seen 10 of these launch already this month."

Three rungs up: "Write three different tweets in the voice of [insert account I admire] about my product. Audience: tech Twitter, AI-fatigued. Constraint: no emojis, no thread, single tweet, no 'excited to share.' Lead with a specific number or claim that makes me click."

Each rung produces materially better output than the one below. The work isn't in the writing — it's in noticing how vague your prompt is and adding the missing specificity.

4. The "Pre-Mortem" pattern

Pre-mortem analysis — "imagine this project failed; what went wrong?" — is one of the most underrated decision-making frameworks in business. It's also a phenomenal AI prompt because it forces the AI to enumerate failure modes you wouldn't have thought to ask about.

Generic prompt: "Help me think through this product launch."

The pattern: "It's six months from now and this product launch has failed badly. Walk me through the most likely reasons — not the dramatic ones, the boring ones that founders actually trip on. List 8."

The "boring ones" qualifier is critical. Without it, the AI gives you "you ran out of money" and "the market shifted." With it, you get "your onboarding flow had three steps too many," "your pricing page didn't surface the most popular tier first," "the cancellation rate spiked because you didn't email Day 30 with re-engagement." These are actionable.

5. The "Two-Pass" pattern

AI tends to optimize for sounding good, not for being right. The two-pass pattern separates draft from criticism, which forces the AI to evaluate its own first answer instead of polishing it.

Generic prompt: "Write a sales page for my product."

The pattern (Pass 1): "Write a 500-word sales page for my product. Don't worry about polish; focus on the structural argument."

The pattern (Pass 2): "Now critique the sales page you just wrote. What are its three biggest weaknesses? Don't be diplomatic — be a hostile reviewer who's seen 500 of these and is sick of all of them."

The pattern (Pass 3): "Rewrite the sales page addressing those three weaknesses. The goal isn't to remove the weak parts — it's to use them as opportunities to strengthen the argument."

The output of Pass 3 is almost always significantly better than what you'd get from a single comprehensive prompt. Iterating against criticism beats iterating against perfection.

6. The "Show, Don't Tell" pattern

If you want output in a specific style or format, don't describe the style — show an example. AI is dramatically better at pattern-matching against examples than parsing style descriptions.

Generic prompt: "Write the email in a casual but professional tone, with short paragraphs, conversational, but informative."

The pattern: "Write the email in this voice: [paste a real email you wrote]. Match the sentence rhythm, the way I open and close, the specific words I use. Subject of this email is [X]."

The improvement is dramatic. AI can mimic a writing voice with one good example far better than it can interpret abstract style instructions. This pattern is also how you make AI output sound like *you* across every channel.

7. The "Roleplay the Audience" pattern

Most "write copy" AI failures aren't because the AI is bad at writing — they're because the AI doesn't know who's reading. The roleplay pattern fixes this by having the AI inhabit the audience first, then write to them.

Generic prompt: "Write a landing page for my product."

The pattern (step 1): "Roleplay as a 35-year-old solo founder who's spent the last 8 months trying to find the right newsletter platform. They've tested Substack and ConvertKit and been frustrated. Describe their day-to-day frustrations and what would make them pull out a credit card."

The pattern (step 2): "Now write a landing page for [my product] that speaks directly to that person's specific frustrations. Lead with what they were just complaining about."

You'll be surprised at how much sharper the copy gets. The AI is essentially writing from a clear customer persona it just generated, which is exactly the brief most agencies struggle to produce in a week.

Generic prompts produce generic output. Every prompting tactic above is fundamentally about removing genericness — through constraint, specificity, role, or example.

The meta-tactic: combine them

The patterns aren't mutually exclusive. The prompts I use for serious work usually combine three or four. A complete prompt for a sales page might include:

  1. Roleplay the audience first (#7)
  2. Show a real example of the voice I want (#6)
  3. Add specific constraints (#2)
  4. Two-pass it for critique and rewrite (#5)

That's a 4-pattern stack on a single piece of output. The result is dramatically better than what any single template would produce, and the work is mostly in noticing what's missing rather than learning new commands.
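
If you run a stack like this weekly, it's worth freezing it into a reusable template. A hypothetical sketch, with names and structure that are mine rather than any tool's API:

  // Hypothetical helper that assembles the 4-pattern stack as a prompt sequence.
  interface SalesPageBrief {
    audience: string;      // #7: who the AI roleplays first
    voiceSample: string;   // #6: a real example of the voice to match
    constraints: string[]; // #2: hard constraints on the output
  }

  function buildSalesPagePrompts(brief: SalesPageBrief): string[] {
    return [
      // Step 1 (#7): inhabit the audience before writing anything.
      `Roleplay as ${brief.audience}. Describe their day-to-day frustrations ` +
      `and what would make them pull out a credit card.`,
      // Step 2 (#6 + #2): draft in the supplied voice, under the constraints.
      `Now write a 500-word sales page for that person. Match this voice:\n` +
      `${brief.voiceSample}\nConstraints: ${brief.constraints.join("; ")}. ` +
      `Focus on the structural argument, not polish.`,
      // Steps 3 and 4 (#5): hostile critique, then rewrite against it.
      `Critique the sales page you just wrote. Three biggest weaknesses. ` +
      `Be a hostile reviewer who's seen 500 of these.`,
      `Rewrite the sales page addressing those three weaknesses.`,
    ];
  }

Send the four prompts in order, in one conversation, so each pass can see the previous output.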

The 10-hour-a-week claim

I want to back up the headline. The hours saved come from three places:

  • ~3 hours/week saved on first drafts — better prompts mean less rewriting, which is the slowest part of writing.
  • ~2 hours/week saved on decisions — the counterargument and pre-mortem patterns surface considerations faster than thinking alone.
  • ~5 hours/week saved on repetitive tasks — once you've built a 4-pattern prompt for a recurring task (your weekly newsletter, your customer support replies, your sales page A/B variants), reusing it saves the full setup cost every time.

Your numbers will vary. The shape of the savings won't.

The honest take

Stop searching for "the perfect AI prompt." It doesn't exist, and you can't reuse it anyway because every situation is slightly different. Learn three or four patterns and combine them fluently. That's the actual prompting skill. Everyone arguing about specific prompt templates online is one level too low.

More AI tactics in your inbox.

The prompting patterns, tool reviews, and tactics we'd actually use ourselves. Every Sunday, free.

Subscribe Free →