[00:00]
Ray: Welcome to podcast 7. I am Ray.
Ashley: And I am Ashley. Hello.
Ray: All right, picture this. It's 5:30 p.m. For most people, that's it. That's the finish line. You've dealt with the email avalanche, the Slack notifications going off constantly, the, you know, the back-to-back Zoom—
Ashley: Terrible day. Yeah.
Ray: You close your laptop, you get that little magnetic click, and your brain just switches off. We're done. That is the universal signal for, okay, my time now. The workday is over.
Ashley: Usually. Yeah.
Ray: But we're looking at a source today that suggests for the most efficient people—I mean, the top 1% of the 1%—that click isn't the end.
Ashley: No, it's the starting gun. It's the beginning of a second shift. And it's a shift that happens, you know, completely while they're sleeping.
Ray: And let's be super clear up front. We are not talking about outsourcing your work to a VA in another time zone.
Ashley: No, not at all. Or like checking emails at midnight. This is about what's being called the night shift of AI. But—and this is really the core of what we need to unpack—it's not using AI the way 90% of us are using it right now.
Ray: Exactly. This isn't about, you know, asking ChatGPT to write a poem or summarize a long email. We are going deep on a methodology published by Mitchell Hashimoto.
Ashley: And for anyone who doesn't live deep in the DevOps world, Hashimoto is not just another tech influencer. He's a real heavyweight—an engineer's engineer.
Ray: He co-founded HashiCorp, right? I mean, he built tools like Vagrant and Terraform that basically run the entire modern internet. If you use the cloud, you are somewhere down the line using his code.
Ashley: And he's also—I found this fascinating—a licensed pilot, flies a Vision Jet, which sounds like a, you know, a fun fact to just throw in there, but it's actually really relevant to his whole approach.
Ray: How so?
Ashley: Pilots are obsessed with checklists, with systems, with what they call safety envelopes. They don't just wing it. And that is exactly how he's treating AI. It's a complex system, not a magic wand.
Ray: So, the mission for this deep dive is to get you, the listener, from what Hashimoto calls the "chatbot phase"—which, let's be honest, is where most of us are stuck—
[03:05]
Ashley: Oh, for sure.
Ray: —to the agent phase. We're going to break down his six-step process for building a workflow that basically runs 24/7.
Ashley: It's the difference between treating AI like a, you know, a slightly better search engine that you have to constantly babysit and treating it like an employee you can trust to work overnight.
Ray: Okay, so let's start with that status quo because I think a lot of people are going to see themselves here. Step one in his journey is "drop the chatbot," which sounds so counterintuitive, right? The chatbot, the ChatGPT window—that's how the entire world got hooked on this stuff.
Ashley: It is. It's the magic phase. You type a question, you get an answer. Feels incredible. But Hashimoto argues that interface is actually a trap for any kind of professional, serious work. He describes this one aha moment: he took a screenshot of a command palette from a code editor, just pasted it into Gemini, and said, "Recreate this for me in SwiftUI."
Ray: And it worked.
Ashley: Boom. It just worked instantly. He had functioning code.
Ray: That's the dopamine hit. You think, "I am a genius. I have superpowers." But then he tried to apply that same magic to a brownfield project.
Ashley: Brownfield project. Let's define that. Not everyone listening is a software architect.
Ray: Sure. So Greenfield is starting from scratch. Blank page, no history, no baggage. AI is great at that, right? A brownfield project is—well, it's real life. It's a massive existing codebase with years of history, weird quirks, legacy junk all over the place.
Ashley: So basically the messy reality of our jobs.
Ray: Exactly. And when he tried to use the chatbot for that, it just—it fell apart. He found himself in this horrible loop of copy-paste, copy-paste.
Ashley: Okay. So what does that look like?
Ray: Copy code from his editor, paste it into the bot, ask for a fix, bot gives a fix, he pastes it back into his code, it breaks something else. He pastes the new error message back to the bot. He said he was spending 20 minutes on a 5-minute task. He was the middleware between the code and the AI.
Ashley: And that is the definition of inefficiency. He realized that the copy-paste loop was slower than just doing it himself.
Ray: So the big realization isn't "AI is overhyped." It's that chat is the wrong interface for complex work.
Ashley: That's the pivot—the move to agent.
Ray: Okay. So what's the difference? How does he define an agent versus a chatbot? This is the key: a chatbot is passive. It just sits there waiting for you to type. An agent—an agent has a loop. It has permissions. It can read your files. It can execute programs. It can make HTTP requests.
Ashley: Hands. It has hands. That's a perfect way to put it. It can actually do the work, not just talk about it. But giving it hands didn't just magically solve everything.
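[The chatbot-versus-agent distinction here boils down to a loop: instead of a single prompt-and-response exchange, an agent repeatedly asks a model for the next action and then executes it with real tools. A minimal illustrative sketch in Python, where `model_next_action` is a stub standing in for a real LLM call:]

```python
import subprocess

# Stub standing in for a real LLM call; a real agent would query a model here.
def model_next_action(task, history):
    if not history:
        return {"tool": "run_command", "args": ["echo", "hello from the agent"]}
    return {"tool": "done", "result": history[-1]}

def run_agent(task, max_steps=10):
    """The agent loop: the model picks actions, the harness gives it 'hands'."""
    history = []
    for _ in range(max_steps):
        action = model_next_action(task, history)
        if action["tool"] == "done":
            return action["result"]
        if action["tool"] == "run_command":
            # The agent can actually execute programs, not just talk about them.
            out = subprocess.run(action["args"], capture_output=True, text=True)
            history.append(out.stdout.strip())
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("demo task"))
```

[A real harness would add more tools (file reads, HTTP requests) and permission checks, but the loop shape is the same.]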
Ray: Not even close. In fact, it led him to step two, which, honestly, sounds like a form of self-torture.
[06:15]
Ashley: We're talking about "reproduce your own work."
Ray: Yes. And this is the part where almost everyone would quit. For weeks, Hashimoto did his entire job twice.
Ashley: Walk us through that. Why on earth would a productivity guru do everything two times over?
Ray: So he'd solve a really hard engineering problem manually—fix a bug, write a new feature—the normal way.
Ashley: The normal way.
Ray: Then he would literally delete all his work, fire up an agent, and try to force the agent to get the exact same result without ever seeing his solution.
Ashley: Wow. He describes that as excruciating. I can see why.
Ray: It is. But the goal wasn't to be fast. It was to calibrate his trust. In aviation, they call it finding the "stall speed." You have to know the exact point where the plane stops flying and starts falling out of the sky.
Ashley: So, he was intentionally pushing the AI until it failed, over and over, just to see the limits.
Ray: Okay, so he learned things like, "It's great at writing boilerplate code, but it totally chokes when I ask it to refactor a database schema."
Ashley: Precisely. He calls this harness engineering. By failing repeatedly, he built this mental map of the tool's boundaries. Most people just skip this step.
Ray: Yeah. You try it once, it fails, and you say, "See, AI is just hype."
Ashley: Exactly. Or worse, you trust it too much, and it breaks production. He put in the reps to know exactly what he could and couldn't delegate.
Ray: That's a huge insight. You can't have the shortcut without putting in the calibration work first. You have to earn the efficiency. And once he earned it, he unlocked step three. And this—this is the part that I think is a game changer for every single person listening.
Ashley: Okay, this is "end of day agents."
Ray: Yep. Or as you called it, the sunset kickoff. I like that. So, the 30-minute rule—this is it. Hashimoto blocks out the last half hour of his workday, but he's not using it to, you know, wrap up loose ends or fire off a few last emails—
Ashley: —which is what everyone else does. We're all frantically trying to clear the inbox so we can close the laptop without feeling guilty.
Ray: Right. He is acting like a manager. He's assigning tasks for the night shift. He spends that 30 minutes just queuing up agents to run while he's offline. The logic is just—it's brilliant. Don't use your active high-energy hours to do the work. Use them to set up work for the hours you don't have.
Ashley: It's leverage. Pure, simple leverage. Okay, let's make this real. What kind of work are these agents actually doing overnight?
Ray: One of the main ones is deep research. So, let's say he's thinking about using a new software library. The old way, he'd have to spend what, four hours reading documentation, checking licenses, looking at GitHub stars, scrolling through Reddit threads to see if people hate it.
[09:20]
Ashley: That's a whole morning gone. And by the time you're done with the research, you're too drained to actually do the coding.
Ray: Exactly. So instead, at 5:30 p.m., he writes a prompt. Something like, "Scan the ecosystem for libraries that solve problem X. Filter them for MIT licenses only. Summarize the pros and cons of the top five. Check social media for recent complaints. Compile a briefing." And then he closes his laptop.
Ashley: He goes to dinner. He sleeps. The agents run for hours, hitting dozens of websites, processing thousands of tokens of data, and when he wakes up at 8:00 a.m., the briefing is just waiting for him.
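[The sunset-kickoff idea can be sketched as a simple task queue: at 5:30 p.m. you append prompts to a local job file, and a separate scheduler or runner picks them up overnight. This is a toy sketch assuming a hypothetical `night_shift_queue.json` file, not any particular agent product:]

```python
import json
import pathlib
import datetime

QUEUE = pathlib.Path("night_shift_queue.json")

def queue_task(prompt):
    """Append an overnight task at end of day; a scheduler runs them later."""
    tasks = json.loads(QUEUE.read_text()) if QUEUE.exists() else []
    tasks.append({"prompt": prompt,
                  "queued_at": datetime.datetime.now().isoformat()})
    QUEUE.write_text(json.dumps(tasks, indent=2))
    return len(tasks)

# 5:30 p.m.: queue the research brief instead of doing it yourself.
queue_task(
    "Scan the ecosystem for libraries that solve problem X. "
    "Filter for MIT licenses only. Summarize pros and cons of the top five. "
    "Check social media for recent complaints. Compile a briefing."
)
```

[Then you close the laptop; whatever runner drains the queue does the hours of work while you sleep.]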
Ray: This is what he calls the "warm start." And it's such a huge psychological shift. Because usually the morning is the hardest part of the day. You're facing a blank slate—
Ashley: The cold start problem.
Ray: Right! It takes so much willpower just to get started. You stare at the screen, you get another coffee, you check Twitter. But with a warm start, you're not a creator anymore. You're an editor.
Ashley: Yes. You sit down with your coffee and you've got a document. "Here are the top three libraries just like you asked." Now your job is just to make a decision.
Ray: It's infinitely easier to edit than to create from scratch. And this applies just as much to go-to-market roles as it does to engineering. Think about it if you're a demand gen leader.
Ashley: Oh, I was just thinking that—account research.
Ray: 100%. It's 5:30 p.m. You've got a list of 50 target accounts. Do not spend your prime-time hours reading their press releases. No. Tell an agent: "Go to these 50 websites, find any mentions of digital transformation, look for new VP hires in the last month, rank them by intent signals," and you wake up to a prioritized hit list instead of a raw list of names.
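[The "prioritized hit list" step is essentially scoring and sorting. A toy sketch of how an agent's findings might be ranked; the signal names, weights, and account data here are illustrative, not from the source:]

```python
# Hypothetical weights for the intent signals the agent was told to find.
SIGNAL_WEIGHTS = {
    "digital_transformation_mention": 3,
    "new_vp_hire": 2,
    "recent_funding": 1,
}

def score(account):
    """Sum the weights of every signal the agent found for this account."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in account["signals"])

def prioritize(accounts):
    """Turn a raw list of names into a ranked hit list."""
    return sorted(accounts, key=score, reverse=True)

accounts = [
    {"name": "Acme", "signals": ["recent_funding"]},
    {"name": "Globex", "signals": ["digital_transformation_mention",
                                   "new_vp_hire"]},
]
for a in prioritize(accounts):
    print(a["name"], score(a))
```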
Ashley: That is the difference between a tool and a teammate. He also brings up triage as a big one. How is that different from research?
Ray: Triage is more about managing the inbound flow. For him, it's GitHub issues—you know, bug reports, feature requests, complaints. It's a constant fire hose, and if you ignore it, the community gets mad. But if you read every single one, you don't get any other work done. So he has agents that read every new issue, but—and this is a really important detail—he does not let the AI respond.
Ashley: Why not? I mean, if it's smart enough to read it, why can't it just say, "Thanks, we got it"?
Ray: Hallucinations. He can't risk a bot promising a feature that's not on the roadmap or being rude to a contributor. He just wants the agent to categorize: "This is urgent. This is a duplicate. This seems like spam."
Ashley: So when he logs on in the morning, he sees the clean, organized pile, not the messy one.
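[Categorize-but-never-respond is a deliberately narrow job, so even a crude sketch shows the shape: classify each inbound item into a bucket and leave replying to the human. The keyword rules below are made up for illustration; a real setup would have a model do the classification:]

```python
# Toy triage rules: categorize inbound issues, never auto-reply.
RULES = [
    ("urgent", ["crash", "data loss", "security"]),
    ("duplicate", ["same as", "already reported"]),
    ("spam", ["buy now", "click here"]),
]

def triage(issue_text):
    """Return a category label; responding is explicitly out of scope."""
    text = issue_text.lower()
    for label, keywords in RULES:
        if any(k in text for k in keywords):
            return label
    return "needs-human-review"

print(triage("App crash on startup after update"))
```

[The point is the constraint, not the rules: the agent sorts the pile, and the human does all the talking.]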
Ray: It's filtering the signal from the noise. You know, all this talk about filtering signals and building these automated workflows, it really highlights a gap. People know what they want to do. They want that warm start. But actually building these GTM workflows is really hard.
Ashley: It is. It's a huge leap from just playing with ChatGPT to building a system that reliably scrapes 50 websites and ranks them.
[SPONSOR] If you're listening and thinking, "I want that for my revenue team," you should really check out the work being done at Demand7 and GTM7. Demand7.ai is focused on the actual demand gen side, finding those intent signals. And GTM7.ai is more on the execution side of things, building the "hands" for these agents. It's basically GTM engineering—taking the stuff from theory to actual automated systems that run. Definitely worth a look if you want to stop just using bots and start building engines.
[12:35]
Ray: So, back to Hashimoto's journey. We have the night shift, we have the warm start, but then he brings up this idea that I thought was so cool: parallel exploration.
Ashley: This one counters the idea that you should only use AI for things you already know you need. He realized that since agent time is so cheap—I mean, it's practically free compared to human time—he can afford to "waste" it on what he calls "unknown unknowns."
Ray: Explain that. Wasting time usually sounds like a bad thing.
Ashley: Think about your own work. How many times have you had a vague idea like, "Hmm, I wonder if there's a better way to structure this database," but you don't look into it because, you know, it's a 4-hour rabbit hole that might lead absolutely nowhere?
Ray: All the time. I stick to the safe path because I have deadlines. I can't justify that kind of speculative work.
Ashley: Exactly. Human curiosity is expensive, but robot curiosity is cheap. So, he'll spin up three different agents to investigate three different vague hunches overnight.
Ray: He calls it parallel processing his curiosity.
Ashley: Yes. And he doesn't expect them all to work. In fact, he kind of expects two of them to fail completely. But maybe one finds a library he'd never heard of. Maybe another finds a brilliant workaround.
Ray: It's like sending scouts out in a video game. Most find nothing, but one finds the gold mine. And for marketing, this is the ultimate A/B/C test. Don't just write one campaign. Tell the agent: "Draft three totally different strategies for this launch. A is FOMO. B is technical superiority. C is pure humor. Research the audience and write the copy for all three."
Ashley: And you wake up to three fully fleshed-out paths. You didn't write them. You just edited them. And maybe the humor one is awful. Who cares? It cost you zero minutes of your time. You just delete it and focus on what he calls the slam dunks.
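[Parallel exploration maps cleanly onto running several independent jobs at once and keeping only the winners. A sketch using Python's standard `concurrent.futures`, with stub "scouts" standing in for real overnight agents; the hunches and findings are invented for illustration:]

```python
import concurrent.futures

def explore(hunch):
    """Stub scout: a real agent would research the hunch overnight."""
    findings = {
        "alt database schema": "found promising library",
        "caching workaround": None,   # dead end
        "new deploy flow": None,      # dead end
    }
    return hunch, findings.get(hunch)

hunches = ["alt database schema", "caching workaround", "new deploy flow"]

# Run all three scouts concurrently; most are expected to find nothing.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = dict(pool.map(explore, hunches))

# Keep the slam dunks, discard the dead ends at zero human cost.
winners = {h: r for h, r in results.items() if r}
print(winners)
```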
Ray: "Outsource the slam dunks." That's the idea. Once you've done that painful training from step two and you know what the agent is good at, you just hand that work off completely. But—and there's always a but—when you start scaling this up, you hit a new problem: noise.
Ashley: Which brings us to step five. He calls it "notification poison." And he is super strict about this. His rule is: turn off agent desktop notifications.
Ray: Which is so interesting, right? Because when you hire a human assistant, you want them to ping you. "Hey, I finished that task. Hey, I have a quick question."
Ashley: And that's what destroys your focus. If you have an agent running and it pops up a notification every 15 minutes, you're not doing deep work. You're just babysitting a robot. He says, "It is my job as a human to be in control of when I interrupt the agent, not the other way around."
Ray: That's a critical point for mental hygiene. The agent is a background process. You are the main one. You check on it when you're ready. It lets him focus on the really hard stuff, the creative architectural thinking, while his robot friend just chugs along in silence.
[15:40]
Ashley: There's another really practical nugget in this phase that I want to pull out. He talks about an AGENTS.md file.
Ray: Oh, this is so simple and so brilliant. It's the "employee handbook" concept.
Ashley: Break that down.
Ray: Okay, so when an agent makes a mistake—let's say it tries to use a software library that's been obsolete for 3 years—most of us would just go into the chat and say, "No, don't use that. Use the new one."
Ashley: Right? You fix the immediate error.
Ray: But then next week, you start a new chat and the agent makes the exact same mistake again because it has no memory of the last conversation.
Ashley: You're trapped in this Groundhog Day loop.
Ray: So, what Hashimoto does is he stops. He doesn't just correct the chat. He goes to a specific file in his project called AGENTS.md and he writes a rule: "Context: when doing database migrations, never use library X; always use library Y."
Ashley: He's updating the system prompt. He's documenting the institutional knowledge.
Ray: So every single time a new agent spins up for that project, the first thing it does is read that file. It's like a new hire reading the onboarding docs. So he's not fixing the instance, he's fixing the system.
Ashley: That is the takeaway. If you're just correcting typos, you'll be correcting them forever. If you update the style guide, the problem disappears. And over time, that AGENTS.md file becomes this incredibly valuable asset. The AI gets smarter about his specific context every day.
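[The "employee handbook" mechanic is just prepending the rules file to every new session's prompt. A minimal sketch, where the handbook content and `build_prompt` helper are illustrative (real agent tools each have their own way of loading AGENTS.md):]

```python
import pathlib

AGENTS_MD = pathlib.Path("AGENTS.md")

# Illustrative handbook content; in practice you accumulate rules over time.
AGENTS_MD.write_text(
    "# Agent rules\n"
    "- When doing database migrations, never use library X; "
    "always use library Y.\n"
)

def build_prompt(task):
    """Every new agent session reads the handbook first, like onboarding docs."""
    rules = AGENTS_MD.read_text() if AGENTS_MD.exists() else ""
    return f"{rules}\n## Task\n{task}"

print(build_prompt("Write a migration adding an email column."))
```

[Fixing the file instead of the chat is what makes the correction permanent: every future session inherits it.]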
Ray: It's an investment. I love that. So, where does this all lead? Step six is "always have an agent running."
Ashley: That's the north star. That's the ambition. But he's also really honest about it. He says he's only there maybe 10 to 20% of the time right now.
Ray: I really appreciate that honesty. He's not pretending he's Iron Man yet. He uses slower, more thoughtful models. He mentions using these deep thinking models that might take 30 minutes just to think before they even write a line of code. He's not looking for instant gratification anymore. He's looking for quality work.
Ashley: And the mindset shift is constant. He's always asking himself, "Is there something an agent could be doing for me right now?" It's like a background thread in his own brain. "I'm stepping away for lunch. Can I queue something up? I'm stuck in a boring meeting. What could be researching while I'm on mute?"
Ray: Exactly. But there's one last point he makes that I think is so important because a lot of people are scared of this. They're afraid of deskilling.
Ashley: The fear that if the robot does all the easy work, we'll forget how to do it?
Ray: Right? "If I never write a simple SQL query again, will I forget SQL?" And Hashimoto argues the complete opposite. He sees himself as a software craftsman. He loves the work. He's not trying to automate himself out of a job.
Ashley: He wants to focus on the parts of the job he actually enjoys. Yes! By outsourcing all the slam dunks, the boilerplate, the tedious research, the triaging, he frees up his brain power for the incredibly hard stuff, the novel problem solving.
Ray: So he's not deskilling, he's upskilling. He's moving up the value stack. He's operating at a higher level of abstraction. He's the conductor of the orchestra, not just the guy playing the triangle.
[18:50]
Ashley: Okay, let's recap this whole journey because these are really clear steps you can start taking. Step one: drop the chatbot. Stop the copy-paste madness.
Ray: Step two: embrace the pain. Force the agent to reproduce your work until you find its stall speed.
Ashley: Step three: the sunset kickoff. Use the last 30 minutes of your day to queue up that night shift.
Ray: Step four: Parallel exploration. Use cheap robot curiosity to explore expensive human ideas.
Ashley: Step five: kill the notifications and build the handbook. Write that AGENTS.md file so you stop repeating yourself.
Ray: And finally, step six: aim for "always on." It completely changes the definition of what a productive day is. It's not just what you finished; it's what you queued up.
Ashley: Which brings us to the provocation for you to think about. We've been talking a lot about developers, but this applies to everyone in the knowledge economy.
Ray: Absolutely.
Ashley: So, here's the question: If your lead-gen engine or your research process or your workflow stops running the moment you close your laptop, are you really leveraging AI or are you just using a slightly faster typewriter?
Ray: That's the one. If it sleeps when you sleep, it's a tool. If it works when you sleep, it's an asset. Think about that tonight when you go to close that lid. Is there something you could kick off?
Ashley: Make that last 30 minutes count. We'd love to hear how you're using this. Are you trying out end-of-day agents? What does your night shift look like? Come join the conversation over at podcast7.ai.
Ray: We'll see you there. Thanks for listening to this deep dive. We'll catch you on the next one.