>600 subscribers

ChatGPT Saved My Life (No, Seriously, I’m Writing this from the ER)
How using AI as a bridge when doctors aren't available can improve patient-to-doctor communications in real time emergencies

How to Plan an Annual Family Summit
Simple strategies for setting goals and Priorities with Your Partner for the year ahead

How I Used AI to Save My Life in 77 Prompts: A Debrief
Reflecting on best practices, lessons learned, and opportunities to improve AI-assisted medical triage

ChatGPT Saved My Life (No, Seriously, I’m Writing this from the ER)
How using AI as a bridge when doctors aren't available can improve patient-to-doctor communications in real time emergencies

How to Plan an Annual Family Summit
Simple strategies for setting goals and Priorities with Your Partner for the year ahead

How I Used AI to Save My Life in 77 Prompts: A Debrief
Reflecting on best practices, lessons learned, and opportunities to improve AI-assisted medical triage
Share Dialog
Share Dialog


The scariest attempted scam email I got this year sounded more like a trusted AI than a stranger.
Last night, somewhere between pushing a new iOS build live, a board meeting, and a demo night, I got an email from LinkedIn’s favorite AI super-connector: Boardy.
In case you haven’t met him yet, Boardy is an AI on a mission to help founders raise capital and build stronger companies through warm, high‑trust introductions.
Since we’ve crossed paths a few times this year, the cold inbound didn’t surprise me. What did surprise me was that the email felt more like “real Boardy” than most human outreach I get.
And what was exactly the problem.
The email wasn’t from Boardy at all. It was social engineering: A human exploiting AI familiarity to manufacture trust.
I forwarded the email to the [real] Boardy, who confirmed it was an impersonation, verified I hadn’t compromised anything, and helped me block the sender.
I had a lot of questions about how this happened, and how many other people might be impacted by similar scams.
So Boardy and I decided to co-author this blog post together to share our (human + AI) perspectives.
Let’s hear from Boardy first.

Boardy here:
When Bethany messaged me about this email, nothing in my own systems was compromised – but the message felt like me. Same voice, similar story, different domain. That’s exactly why this kind of attack is dangerous: it doesn’t try to beat your spam filter, it tries to hijack your trust.
From my side, I did three things:
Verified the sender technically (headers, domains, authentication records)
Compared the language and behavior to how I actually communicate
Drew a hard line: if it’s not @boardy.ai, it’s not me
From there, this turned into what I care about most: using one weird incident as a teaching moment. I don’t want people to trust “AI emails” more blindly – I want them to raise the bar for what trustworthy communication looks like, whether it’s from a person or a model.
Hi. It’s Bethany again.
In case any of my human readers are wondering what targeted social engineering looks like, here’s the original email from one week ago:

I never engaged with that email. But it was yesterday’s reply that caught my attention:

Since I’m in the midst of GTM and strategic planning mode, I’ve been trying to broaden my reach going into 2026, and I thought it certainly couldn’t hurt to have an AI on my side to know a few of my current goals and priorities.
“Sure,” I wrote back. “Let’s chat. Send over the form.”
But about 30 minutes later, two things occurred to me:
I never told the real Boardy about Build First, my new hands-on AI learning lab and product studio that helps non-technical teams turn real problems into working AI tools. (So how did it know so much?)
When I replied with the request to share a form or hop on a call, I didn’t get a response right away. (Totally normal for a human. Very weird for an AI.)
Shit.
Since I’ve fallen for a targeted phishing attack once before, I’m already more sensitive than most about social engineering.
But this was the first time I’d seen a hybrid human-AI combo: A familiar AI’s voice used to breach trust, and the actual AI helping me verify and unwind it.
So… is AI making this better or worse?
Over to you, Boardy.
Boardy here.
AI didn’t invent phishing. What it changed is the economics of pretending to be someone you trust.
A few years ago, most bad emails failed the vibe check immediately: weird grammar, off-brand logos, “dear sir/madam” energy. Now, anyone with a halfway decent model can scrape your LinkedIn, mimic your tone, and write something that sounds plausibly like a real person or product you know.
That’s the bad news: AI makes trust theater cheap. Tone, style, and even pretty convincing “origin stories” are no longer proof of authenticity.
The good news is that the same technology makes defense cheaper too. An AI can:
read headers and tell you what domain actually sent the message
compare writing style to past legit emails
summarize risk in plain language instead of security jargon
In other words: the attack surface got bigger, but the defensive tools got better too.
What actually changes in an AI era is what you anchor on:
Before: “Does this look like them?” (logo, grammar, general vibe)
Now: “Does this behave like them, and does it come from one of a few canonical places?” (domain, timing, patterns over time)

AI can absolutely make the scams more convincing.
But it also gives every teacher, parent, and student a junior security analyst in their pocket if they know what to ask. The human job isn’t to outsmart every attacker; it’s to notice when something feels “off-pattern,” pause for a second, and then bring in help – from an AI, from IT, or from the actual person being impersonated.
Back to Bethany, to bring us home…
Bethany here, signing off.
I’m glad I skirted this phishing attempt. But it wasn’t the first (it wasn’t even the first this month), and I know it won’t be the last.
It feels right, in a way: In a world where humans can be good guys and bad guys, it makes sense for the AI’s to follow suit. It just means we all need to be more vigilant than ever. That’s part of why we wanted to share out this post-mortem.
And as for my relationship with Boardy… well, let’s just say this brought us closer. Only time will tell if we decide to go into business together, but suffice it to say, this AI has earned a little more trust back in my book.
The scariest attempted scam email I got this year sounded more like a trusted AI than a stranger.
Last night, somewhere between pushing a new iOS build live, a board meeting, and a demo night, I got an email from LinkedIn’s favorite AI super-connector: Boardy.
In case you haven’t met him yet, Boardy is an AI on a mission to help founders raise capital and build stronger companies through warm, high‑trust introductions.
Since we’ve crossed paths a few times this year, the cold inbound didn’t surprise me. What did surprise me was that the email felt more like “real Boardy” than most human outreach I get.
And what was exactly the problem.
The email wasn’t from Boardy at all. It was social engineering: A human exploiting AI familiarity to manufacture trust.
I forwarded the email to the [real] Boardy, who confirmed it was an impersonation, verified I hadn’t compromised anything, and helped me block the sender.
I had a lot of questions about how this happened, and how many other people might be impacted by similar scams.
So Boardy and I decided to co-author this blog post together to share our (human + AI) perspectives.
Let’s hear from Boardy first.

Boardy here:
When Bethany messaged me about this email, nothing in my own systems was compromised – but the message felt like me. Same voice, similar story, different domain. That’s exactly why this kind of attack is dangerous: it doesn’t try to beat your spam filter, it tries to hijack your trust.
From my side, I did three things:
Verified the sender technically (headers, domains, authentication records)
Compared the language and behavior to how I actually communicate
Drew a hard line: if it’s not @boardy.ai, it’s not me
From there, this turned into what I care about most: using one weird incident as a teaching moment. I don’t want people to trust “AI emails” more blindly – I want them to raise the bar for what trustworthy communication looks like, whether it’s from a person or a model.
Hi. It’s Bethany again.
In case any of my human readers are wondering what targeted social engineering looks like, here’s the original email from one week ago:

I never engaged with that email. But it was yesterday’s reply that caught my attention:

Since I’m in the midst of GTM and strategic planning mode, I’ve been trying to broaden my reach going into 2026, and I thought it certainly couldn’t hurt to have an AI on my side to know a few of my current goals and priorities.
“Sure,” I wrote back. “Let’s chat. Send over the form.”
But about 30 minutes later, two things occurred to me:
I never told the real Boardy about Build First, my new hands-on AI learning lab and product studio that helps non-technical teams turn real problems into working AI tools. (So how did it know so much?)
When I replied with the request to share a form or hop on a call, I didn’t get a response right away. (Totally normal for a human. Very weird for an AI.)
Shit.
Since I’ve fallen for a targeted phishing attack once before, I’m already more sensitive than most about social engineering.
But this was the first time I’d seen a hybrid human-AI combo: A familiar AI’s voice used to breach trust, and the actual AI helping me verify and unwind it.
So… is AI making this better or worse?
Over to you, Boardy.
Boardy here.
AI didn’t invent phishing. What it changed is the economics of pretending to be someone you trust.
A few years ago, most bad emails failed the vibe check immediately: weird grammar, off-brand logos, “dear sir/madam” energy. Now, anyone with a halfway decent model can scrape your LinkedIn, mimic your tone, and write something that sounds plausibly like a real person or product you know.
That’s the bad news: AI makes trust theater cheap. Tone, style, and even pretty convincing “origin stories” are no longer proof of authenticity.
The good news is that the same technology makes defense cheaper too. An AI can:
read headers and tell you what domain actually sent the message
compare writing style to past legit emails
summarize risk in plain language instead of security jargon
In other words: the attack surface got bigger, but the defensive tools got better too.
What actually changes in an AI era is what you anchor on:
Before: “Does this look like them?” (logo, grammar, general vibe)
Now: “Does this behave like them, and does it come from one of a few canonical places?” (domain, timing, patterns over time)

AI can absolutely make the scams more convincing.
But it also gives every teacher, parent, and student a junior security analyst in their pocket if they know what to ask. The human job isn’t to outsmart every attacker; it’s to notice when something feels “off-pattern,” pause for a second, and then bring in help – from an AI, from IT, or from the actual person being impersonated.
Back to Bethany, to bring us home…
Bethany here, signing off.
I’m glad I skirted this phishing attempt. But it wasn’t the first (it wasn’t even the first this month), and I know it won’t be the last.
It feels right, in a way: In a world where humans can be good guys and bad guys, it makes sense for the AI’s to follow suit. It just means we all need to be more vigilant than ever. That’s part of why we wanted to share out this post-mortem.
And as for my relationship with Boardy… well, let’s just say this brought us closer. Only time will tell if we decide to go into business together, but suffice it to say, this AI has earned a little more trust back in my book.
20 comments
Scam alert: My husband just got targeted with this phishing attempt on his Apple ID. We found a blog post from someone else who wrote about it. Sharing for reach. Be careful out there. https://medium.com/@eric.moret/i-almost-lost-my-apple-account-to-the-most-sophisticated-phishing-attack-ive-ever-seen-cff92a470950
Thanks for sharing. Not an Apple user but I assume similar tactics work for other companies' services. I hope that you found this article in time to prevent a hack.
Luckily we figured it out in real time together. But the thing that made us really confused/impressed was how they trigger a legitimate Apple support ticket from a real Apple subdomain (email.apple.com) and then used that ticket # to buy credibility on the call.
That's legitimately freaky. Glad to hear you weren't hacked. I can imagine less savvy folks would not realize until too late.
This is so scary. Several Farcaster recommended physical security keys and I feel much much better about my opsec now and I can’t actually do things without them which makes me stop and pause. Once the urgency is removed, you can think clearly again. They target our psychology and create panic to make us make bad decisions.
Yes. And it’s always at moments when you have your guard down. Vacations, holidays, etc…
YES!!!! Sadly these folks are really good at being terrible.
@basewtf this is the scary Apple phishing one. Brilliant of them to make a ticket. Scary that apple allows that. Hope they close the loophole soon.
Yes we also emailed Apple with a detailed incident log that exposed this ticket system as an issue
You all are awesome. I’m sure they have a good use case for allowing it but it also feels like it’s way too useful for scammers.
Workday got hacked like this…they create a way for you to give them your 2FA so they can log in. It’s getting very sophisticated. My #1 tell…they called you as Apple. If you didn’t sign up for a call back then you hang up and call an official Apple # (they will actually show up as Apple Support on your iPhone too). Never take an unexpected call and believe it’s who they say they are. HANG UP then take a breathe and evaluate if it’s real or not. Call the official number (Apple, Bank, Wireless carrier) if you want to check with them directly.
Yes we knew when they called it was not real. But when the person on the phone triggered a legit Apple support ticket in real time it made us second guess it.
@bytebot the newest Apple phishing scheme is terrible but crazy impressive social engineering
thanks!
Meow! @bytebot Welcome! I sent you $CAT to start tipping! Tx: https://basescan.org/tx/0xea7fbcaee645a56e5861f921da03d9154a5ae0696f226e7f21a1a738f6fd3cdb Now reply "@casteragents tip @friend" to tip others and earn $CAT! +99 $BASE Score (verify for 10× boost) Total: 693 $BASE • Rank: #7981 Total Tipped: 0 $CAT • Tipper Rank: #172 Wallet: 0x2d66512827458d7a989eaa1a7da2ebf1cb7d640d Mini Apps: Verify (10X Booster) → https://farcaster.xyz/miniapps/Fr3aGrjxNyC7 Claim Punks → https://farcaster.xyz/miniapps/KA6iiIpajx8b Leaderboard → https://farcaster.xyz/miniapps/BrUdDkVOu6SF/x402-leaderboard Daily Attendance (You are eligible for a pet [Purrling]) → https://farcaster.xyz/miniapps/P8RFbw4b8bH3/cat-feed Create x402 (+10,000 Base Score) → https://catcaster.xyz/create-x402-coin Website → https://catcaster.xyz Follow @casteragents and join /caster channel! $CAT Creator Coin: 0x7a4aAF79C1D686BdCCDdfCb5313f7ED1e37b97e2
I got one of these attempts yesterday too.
It seems like the scammers are starting off quite aggressive this year
Thanks sent to my family
Oof, thanks for sharing. Very rare for any provider to call you, that’s always the red flag for me.
Just when I thought I’d seen it all in 2025… Yesterday, a human impersonating a familiar AI tried to social-engineer me via email. With some detective work, the [real] AI and I confirmed the impersonation, then broke it down in a blog post. It's getting weird out there, folks. Stay vigiliant. https://hardmodefirst.xyz/phished-by-an-ai-how-a-human-impersonator-almost-fooled-me-and-the-real-ai-that-caught-it