- cross-posted to:
- cybersecurity@infosec.pub
I wrote this book without AI. Every chapter, every argument, every edit. Four months, a notebook, and no tool that would tell me my drafts were compelling.
The most predictable version of this project: run the manuscript through ChatGPT, get told each section is “well-structured and insightful,” ship it. A book about AI sycophancy, produced with the help of a sycophantic AI. The irony would have been neat. The argument less so.
Anyone who’s tried it knows what happens. You get agreement. You get polish. You get “great point – you might also want to consider…” followed by something you already said, reworded. Occasionally it suggests adding an executive summary. The book has no executive.
What you don’t get is resistance. Not real resistance. Building a sustained argument requires something pushing back – saying a thing, testing it, finding where it breaks, deciding whether the break invalidates the claim or sharpens it. That process needs a counterforce. AI is not a counterforce. It’s a mirror with better vocabulary than you.
The book is called “Looks Good to Me.”
That’s what the mirror says.