<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://francisfuzz.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://francisfuzz.com/" rel="alternate" type="text/html" /><updated>2026-04-16T14:01:12+00:00</updated><id>https://francisfuzz.com/feed.xml</id><title type="html">francisfuzz.com</title><subtitle>Francis&apos; personal portfolio</subtitle><author><name>Francis Batac</name></author><entry><title type="html">Five learnings from Claude Code San Diego</title><link href="https://francisfuzz.com/posts/2026/04/14/10-learnings-from-claude-code-san-diego/" rel="alternate" type="text/html" title="Five learnings from Claude Code San Diego" /><published>2026-04-14T00:00:00+00:00</published><updated>2026-04-14T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2026/04/14/10-learnings-from-claude-code-san-diego</id><content type="html" xml:base="https://francisfuzz.com/posts/2026/04/14/10-learnings-from-claude-code-san-diego/"><![CDATA[<p>I attended Claude Code San Diego on April 14, 2026. Here are 5 things I took away from the various presenters.</p>

<ul>
  <li>Every presenter converged on the same pattern from different angles: iterating quickly towards product-market fit, leveraging strengths as a domain expert, and listening closely to what actually needs solving rather than getting caught up with the tooling.</li>
  <li>Learn to understand pain better than the builders do. That’s where the leverage is when combined with agentic tooling.</li>
  <li>The people winning right now are those who actively collapse the distance between “I understand this problem” and “I built the solution.” Sharpening these tools daily, even for a few hours, compounds quickly.</li>
  <li>Keep your orchestration agent lean and leverage subagents to optimize your context window. See my <a href="https://github.com/francisfuzz/dotfiles/blob/99a8c1bdeb9eb31cf1e9c64acb2b0296ee996dad/AGENTS.md">AGENTS.md @ <code class="language-plaintext highlighter-rouge">99a8c1b</code></a> for my latest thinking and application on this.</li>
  <li>Sign up early to present at the next round of lightning talks.</li>
</ul>]]></content><author><name>Francis Batac</name></author><category term="claude-code" /><category term="learnings" /><summary type="html"><![CDATA[Five takeaways from Claude Code San Diego: domain expertise, collapsing the gap between understanding and building, and keeping your agentic setup lean.]]></summary></entry><entry><title type="html">Building a second brain</title><link href="https://francisfuzz.com/posts/2026/04/06/building-a-second-brain/" rel="alternate" type="text/html" title="Building a second brain" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2026/04/06/building-a-second-brain</id><content type="html" xml:base="https://francisfuzz.com/posts/2026/04/06/building-a-second-brain/"><![CDATA[<p>I’ve cycled through many writing mediums—notebooks, Apple Notes, Notion, Evernote, Google Docs, and paper. It wasn’t until December 2025 that I settled on a Git repository as my primary system for logging thoughts, work, and learnings.</p>

<p>The shift came after reading <a href="https://jonmagic.com/posts/how-i-work-2025-edition/">Jon Magic’s “How I Work, 2025 Edition”</a> and revisiting <a href="https://ben.balter.com/2015/11/12/why-urls/">Ben Balter’s “Why everything should have a URL”</a>. As someone who’s worked with Git since 2012, the transition felt natural—directing my working memory into version control rather than scattered notes apps.</p>

<p>The immediate benefit: performance reviews and accountability. Instead of scrambling to remember what happened in the last quarter, I had a structured log. Over the past few months, I’ve used this system to understand my own processes better.</p>

<h2 id="current-structure-april-2026">Current Structure (April 2026)</h2>

<ul>
  <li><strong>Daily Projects</strong> — Day-level focus logs and context, organized by date</li>
  <li><strong>Weekly Notes</strong> — Planning, goals, and backlinks; weekly anchors for reflection</li>
  <li><strong>Meeting Notes</strong> — Conversations with timestamps and action items</li>
  <li><strong>Snippets</strong> — Weekly accomplishment summaries (Ships, Collabs, Risks, etc.)</li>
  <li><strong>Executive Summaries</strong> — Distilled updates for leadership</li>
  <li><strong>Projects</strong> — Multi-week initiatives with milestones and resources</li>
  <li><strong>Self</strong> — Personal context: assessments, performance reviews, goals, personality insights</li>
  <li><strong>Feedback, Transcripts, Templates, Archive</strong> — Supporting systems</li>
</ul>

<h3 id="automation-skills">Automation Skills</h3>

<p>I’ve built several agent skills to power the workflow:</p>

<ul>
  <li>Creating daily notes</li>
  <li>Summarizing weekly notes</li>
  <li>Transforming meeting transcripts into reusable artifacts</li>
  <li>PR review assist: tracking reviews and reusable context</li>
</ul>

<h2 id="what-the-git-log-reveals">What the Git Log Reveals</h2>

<p>The real insight came from studying the corpus itself. I’ve used the historical record to crawl my writing, build a personal writing style guide (packaged as an agent skill), and link ideas across contexts. But the most revealing discovery was what Git captures <em>unintentionally</em>.</p>

<h3 id="the-commit-graph-as-data">The Commit Graph as Data</h3>

<p>The content of each note is intentional—I chose every word. But the commit graph isn’t. The timestamps, cadence, and gaps accumulated without my direction.</p>

<p><strong>The gaps are information too.</strong> What doesn’t appear in the commit history is as expressive as what does.</p>

<ul>
  <li>Weekends are nearly silent (intentional design working as intended)</li>
  <li>The system is almost entirely optimized for <em>capture</em>, not <em>retrieval testing</em>. Writing something down is treated as equivalent to knowing it. The implicit bet: <code class="language-plaintext highlighter-rouge">git log</code> itself is the retrieval mechanism. When I need to reconstruct February, I run the log, filter by date, and follow the breadcrumbs.</li>
</ul>
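<p>Concretely, that reconstruction is plain <code class="language-plaintext highlighter-rouge">git log</code> filtering. A self-contained sketch of the pattern (the directory name, dates, and commit message are illustrative, not my actual repository layout):</p>

```shell
# Scratch repo with one backdated "daily note" commit, purely for demonstration.
repo=$(mktemp -d) && cd "$repo" && git init -q
mkdir daily && echo "focus log" > daily/2026-02-03.md
git add daily
GIT_AUTHOR_DATE="2026-02-03T09:00:00" GIT_COMMITTER_DATE="2026-02-03T09:00:00" \
  git -c user.name="me" -c user.email="me@example.com" commit -qm "daily: 2026-02-03 focus log"

# The retrieval pattern: filter the commit log by date range and path,
# then follow the breadcrumbs into the individual diffs.
git log --since="2026-01-01" --until="2026-12-31" --oneline -- daily/
```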

<h3 id="time-made-legible">Time Made Legible</h3>

<p>It’s easy to treat version control as a technical requirement—something developers use because that’s what we do. But in a knowledge system, Git does something different: it makes time legible.</p>

<ul>
  <li>Every commit is an unfakeable timestamp</li>
  <li>Every commit message is a claim about intent</li>
  <li>Every diff is the delta between two states of mind</li>
  <li>The full history is preserved—not edited, not summarized, not lost</li>
</ul>

<p>You can go back and see not just <em>what</em> you thought, but <em>when</em> you thought it and <em>how confident you were</em>.</p>

<p><strong>This is what separates it from a notes app.</strong> Notes apps store content. Git stores content <em>and</em> the progression of content over time. The progression is often more valuable than any single snapshot.</p>

<h2 id="where-this-is-headed">Where This Is Headed</h2>

<p>The system is alive and imperfect. Some weeks are captured in detail; others get a single setup commit and nothing more. The structure has evolved—directories renamed, skills removed, workflows refactored. That evolution is documented too (which is the point).</p>

<p>The goal was never perfection. It was to build a system that improves through use—and that leaves enough of a trail that anyone paying close attention can see the shape of the work over time.</p>

<hr />

<p><strong>Resources:</strong></p>
<ul>
  <li><a href="https://github.com/francisfuzz/second-brain-template">second-brain-template (open source)</a></li>
</ul>]]></content><author><name>Francis Batac</name></author><category term="project" /><summary type="html"><![CDATA[Extending my contexts using Git + agents — and what the log reveals that I never intended to show]]></summary></entry><entry><title type="html">Building tariff-everywhere: One Dataset, Many Interfaces</title><link href="https://francisfuzz.com/posts/2026/03/26/tariff-everywhere/" rel="alternate" type="text/html" title="Building tariff-everywhere: One Dataset, Many Interfaces" /><published>2026-03-26T00:00:00+00:00</published><updated>2026-03-26T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2026/03/26/tariff-everywhere</id><content type="html" xml:base="https://francisfuzz.com/posts/2026/03/26/tariff-everywhere/"><![CDATA[<p>After hearing about US Tariff Codes so much on the news, I wanted to understand them for myself. I decided to spend some time digging into the tariff codes and using Claude Code to organize them in a meaningful way.</p>

<p>What started as a small project to store tariff data in a SQL database and make it queryable through a CLI grew into something much broader: an MCP server, a <a href="https://tariff-everywhere.fly.dev/">publicly searchable index</a> powered by <a href="https://datasette.io/">Datasette</a> and <a href="http://fly.io/">Fly.io</a>, and an exported Python library that developers can integrate into their code (though at the time of writing it isn’t published to any registry, but it certainly could be). It’s all published as an <a href="https://github.com/francisfuzz/tariff-everywhere">open source repository at <code class="language-plaintext highlighter-rouge">francisfuzz/tariff-everywhere</code></a>.</p>

<p>Give it a try: <a href="https://tariff-everywhere.fly.dev/">https://tariff-everywhere.fly.dev/</a></p>

<h2 id="how-it-started">How it started</h2>

<p>Straight from <a href="https://hts.usitc.gov/">the United States International Trade Commission’s site</a>:</p>

<blockquote>
  <p>The Harmonized Tariff Schedule of the United States (HTS) sets out the tariff rates and statistical categories for all merchandise imported into the United States. The HTS is based on the international Harmonized System, which is the global system of nomenclature applied to most world trade in goods.</p>
</blockquote>

<p>When I started exploring the US International Trade Commission’s API, I was just trying to understand what was there. The public endpoint was less documented than I expected, and the original plan I’d sketched referenced an endpoint that no longer worked. That initial confusion actually shaped everything that came after.</p>

<p>The real API turned out to be simpler than I’d feared: a flat JSON feed returning about 28,750 tariff entries across 99 chapters. No pagination helpers, no release versioning—just straightforward data. A chapter-based ingest pattern was a natural fit: download one chapter at a time, parse it, store it. It scales linearly and gives you natural checkpoints.</p>

<p>I documented these learnings in early commits because I knew future me would forget the dead ends. 🙈</p>

<h2 id="building-three-layers">Building Three Layers</h2>

<p>Once I understood the API shape, I needed to figure out how to make this data actually useful. I wanted it to work in three contexts: from the command line for developers, as an MCP server for Claude, and eventually as a browsable interface. But first things first—I had to build the foundation.</p>

<p>The <a href="https://github.com/francisfuzz/tariff-everywhere/blob/main/scripts/ingest.py">ingest script</a> was straightforward: hit the API for all 99 chapters, parse the JSON, and store everything in SQLite. I created three tables—<code class="language-plaintext highlighter-rouge">chapters</code>, <code class="language-plaintext highlighter-rouge">hts_entries</code>, and <code class="language-plaintext highlighter-rouge">data_freshness</code>—to capture both the tariff data and metadata about when things were last checked. About 134,000 entries in total. It’s a lot of data, but SQLite handles it without breaking a sweat.</p>
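<p>A sketch of that chapter-by-chapter pattern. The three table names come from the post; the column names, JSON keys, and the injected <code class="language-plaintext highlighter-rouge">fetch</code> function are illustrative, not the project’s actual code:</p>

```python
import json
import sqlite3

# Illustrative schema -- the table names match the post; the columns are assumptions.
SCHEMA = """
CREATE TABLE IF NOT EXISTS chapters (num INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE IF NOT EXISTS hts_entries (
    htsno TEXT, chapter INTEGER, description TEXT, general_rate TEXT
);
CREATE TABLE IF NOT EXISTS data_freshness (
    chapter INTEGER PRIMARY KEY, content_hash TEXT, last_checked TEXT, last_changed TEXT
);
"""

def ingest_chapter(conn, chapter_num, fetch):
    """Download one chapter, parse it, store it -- one natural checkpoint.

    `fetch` is injected: an HTTP call in production, a stub in tests.
    """
    entries = json.loads(fetch(chapter_num))
    conn.execute("DELETE FROM hts_entries WHERE chapter = ?", (chapter_num,))
    conn.executemany(
        "INSERT INTO hts_entries VALUES (?, ?, ?, ?)",
        [(e["htsno"], chapter_num, e["description"], e.get("general", ""))
         for e in entries],
    )
    conn.commit()

def ingest_all(conn, fetch, chapters=range(1, 100)):
    conn.executescript(SCHEMA)
    for num in chapters:  # scales linearly, with per-chapter checkpoints
        ingest_chapter(conn, num, fetch)
```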

<p>The CLI came next, built with <a href="https://typer.tiangolo.com/">Typer</a>. I put together commands for searching by keyword, looking up exact codes, browsing by chapter, and retrieving metadata. Then I built an MCP server that exposed the same queries over stdio for Claude integration. The MCP work taught me something about JSON handling—Claude pointed out that using <code class="language-plaintext highlighter-rouge">print()</code> directly instead of Rich’s console formatting keeps ANSI control characters out of the JSON output. A small detail, but one that matters for clean integration.</p>
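<p>That ANSI detail is worth pinning down: a styled console print can wrap output in escape codes, while a plain <code class="language-plaintext highlighter-rouge">print()</code> of <code class="language-plaintext highlighter-rouge">json.dumps()</code> stays clean. A minimal illustration (the function name is mine, not the project’s):</p>

```python
import json

def emit_mcp_result(rows):
    """Serialize query results for an MCP response over stdio.

    Plain print() keeps the payload free of ANSI control characters;
    a styled console print can inject escapes that break JSON parsers.
    """
    payload = json.dumps({"results": rows}, indent=2)
    assert "\x1b" not in payload  # no escape sequences on the wire
    print(payload)
    return payload
```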

<p><a href="https://github.com/francisfuzz/tariff-everywhere/tree/main/tests">Tests were important here too</a>. I built the test suite early, using in-memory SQLite fixtures so tests run fast and don’t depend on actual database state. That pattern paid off immediately when refactoring came later.</p>
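<p>The fixture pattern is simple enough to sketch with the standard library alone (in the project it lives behind pytest; the names here are illustrative):</p>

```python
import sqlite3

def make_test_db(entries=()):
    """Fresh in-memory database seeded with sample rows.

    Each test builds its own connection: fast, isolated, and independent
    of whatever is in the on-disk database.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE hts_entries (htsno TEXT, description TEXT)")
    conn.executemany("INSERT INTO hts_entries VALUES (?, ?)", list(entries))
    return conn

def search(conn, keyword):
    """Stand-in for the query logic under test."""
    return conn.execute(
        "SELECT htsno FROM hts_entries WHERE description LIKE ?",
        (f"%{keyword}%",),
    ).fetchall()
```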

<h2 id="the-freshness-problem">The Freshness Problem</h2>

<p>Once the initial ingest worked, I started thinking about what happens when tariff data changes. The USITC doesn’t expose revision numbers or release dates, so I needed another way to detect staleness. The solution was content hashing: hash each chapter, compare against what’s stored, and only re-ingest if something actually changed.</p>

<p>The <a href="https://github.com/francisfuzz/tariff-everywhere/blob/main/scripts/refresh.py">refresh script</a> handles this in parallel. It spins up a thread pool, hashes all 99 chapters at once, and compares the results to what’s in the database. If a chapter’s content differs, that chapter gets re-ingested. I also started tracking two timestamps per chapter: when we last checked and when the data actually changed. That distinction matters—it tells you whether something is truly stale or just old data you’ve already validated.</p>

<p>Before any refresh operation, <a href="https://github.com/francisfuzz/tariff-everywhere/blob/9ff4b05fb1edb3476c7040226d999651dfc9dfe7/scripts/refresh.py#L132-L147">the script creates a backup</a>. It’s a simple step, but it means if something goes wrong, you can recover. Defensive programming isn’t paranoia—it’s respecting that production systems have higher stakes than development. The backup costs almost nothing and buys a lot of peace of mind.</p>
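<p>The backup step itself is tiny, which is part of the argument for always doing it. A sketch (paths and the naming scheme are illustrative):</p>

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def backup_database(db_path, backup_dir="backups"):
    """Copy the database aside before any mutating refresh.

    If the refresh goes wrong, recovery is one file-copy away.
    """
    db_path = Path(db_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = dest_dir / f"{db_path.stem}-{stamp}{db_path.suffix}"
    shutil.copy2(db_path, dest)
    return dest
```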

<p>This is where I learned to distinguish between ingest and refresh. Ingest is destructive—it rebuilds everything from scratch. Refresh is careful—it validates, updates selectively, and preserves the database. They have different failure modes, and treating them as such made the system safer.</p>

<h2 id="stopping-to-refactor">Stopping to Refactor</h2>

<p>At some point, I noticed I was duplicating query logic between the CLI and the MCP server. Both needed the same database operations, just with different invocation patterns. This bothered me more than it should have, so I stopped feature work and extracted a shared core library—which later became an exportable Python module.</p>

<p>Creating <code class="language-plaintext highlighter-rouge">hts_core/</code> with a configurable database path meant both interfaces could import the same functions. Now when I need to change how queries work, I change them in one place. It’s the kind of refactoring that seems optional until you need to update something three months later and realize how grateful you are to past you.</p>
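<p>The shape of that extraction, sketched: query functions that take a configurable database path, importable by the CLI, the MCP server, and tests alike (function and column names are illustrative, not the actual <code class="language-plaintext highlighter-rouge">hts_core</code> API):</p>

```python
import sqlite3
from pathlib import Path

DEFAULT_DB_PATH = Path("data/hts.db")  # overridable, so tests can point elsewhere

def get_connection(db_path=None):
    conn = sqlite3.connect(db_path or DEFAULT_DB_PATH)
    conn.row_factory = sqlite3.Row
    return conn

def lookup_code(htsno, db_path=None):
    """Exact-code lookup shared by every interface."""
    conn = get_connection(db_path)
    try:
        row = conn.execute(
            "SELECT htsno, description FROM hts_entries WHERE htsno = ?",
            (htsno,),
        ).fetchone()
        return dict(row) if row else None
    finally:
        conn.close()
```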

<p>I also hardened the Docker setup and added CI: GitHub Actions runs the test suite on every commit. Standard stuff, but it matters to know the project works reliably—and if it breaks, I’ll know immediately.</p>

<h2 id="the-datasette-pivot">The Datasette Pivot</h2>

<p>Claude suggested something that changed how I thought about the project: “What if we exposed this as a searchable web interface?” I’d been thinking CLI and MCP only, but that question opened something up. Why shouldn’t people be able to browse tariffs in a browser?</p>

<p>That’s how I ended up building Datasette integration. Datasette is remarkable—it lets you publish a SQLite database as a web interface without writing any web code. Point it at your database, and you have a searchable, browsable interface with full-text search on tariff descriptions. No Flask routes, no HTML templates, no API endpoints to maintain.</p>

<p>The integration taught me some hard lessons. If you create FTS5 indexes with raw SQL, Datasette won’t auto-detect them—but if you create them with <code class="language-plaintext highlighter-rouge">sqlite-utils</code>, Datasette sees them immediately. I learned that the hard way when I deployed the first version and search didn’t work. I also ran into a Typer/click compatibility issue that took some untangling.</p>
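<p>The difference is in how the index gets registered, not in FTS5 itself. A standard-library sketch of the underlying search (assuming your <code class="language-plaintext highlighter-rouge">sqlite3</code> build ships FTS5; in the project the index is created with <code class="language-plaintext highlighter-rouge">sqlite-utils</code> so Datasette detects it):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hts_entries (htsno TEXT, description TEXT)")
conn.executemany(
    "INSERT INTO hts_entries VALUES (?, ?)",
    [("0101.21.00", "Purebred breeding horses"),
     ("7403.11.00", "Refined copper, cathodes and sections of cathodes")],
)

# Raw-SQL FTS5 index: queries work, but Datasette won't auto-detect it.
# sqlite-utils' enable_fts() builds the same kind of index plus the
# metadata Datasette looks for.
conn.execute("CREATE VIRTUAL TABLE hts_fts USING fts5(htsno UNINDEXED, description)")
conn.execute("INSERT INTO hts_fts SELECT htsno, description FROM hts_entries")

def fts_search(term):
    return conn.execute(
        "SELECT htsno FROM hts_fts WHERE hts_fts MATCH ?", (term,)
    ).fetchall()
```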

<p>Getting the chapter titles right was a small thing that mattered a lot. Instead of showing “Chapter 01,” the interface now shows “Live Animals” or “Copper and Articles Thereof.” Users see actual chapter names instead of numbers. Some entries have <code class="language-plaintext highlighter-rouge">&lt;i&gt;</code> tags for scientific names, so I installed <code class="language-plaintext highlighter-rouge">datasette-render-html</code> to make those render correctly instead of showing raw HTML.</p>

<p>This pivot—from API-only to browsable web interface—is probably the thing I’m most proud of. It made the tariff data accessible to people who don’t write code.</p>

<h2 id="getting-the-name-right">Getting the Name Right</h2>

<p>At some point it became clear that <code class="language-plaintext highlighter-rouge">usitc-app</code> was the wrong name. It was descriptive—it told you what API it used—but it didn’t communicate what the project did or why you’d want to use it. After thinking about what this thing really was, the name <code class="language-plaintext highlighter-rouge">tariff-everywhere</code> emerged: a lookup service you can use everywhere—in your terminal, in Claude, in a web browser. Anywhere you might need to understand a tariff code.</p>

<p>The rename was methodical: first the repository references, then the live web app URL, then the deployment configurations. I could have left stray references to the old name, but that’s the kind of thing that bugs future maintainers. If you’re going to rename something, rename it all the way through.</p>

<h2 id="documentation-and-licensing">Documentation and Licensing</h2>

<p>A project isn’t really done until someone else can use and maintain it. I rewrote the README to guide people through the three ways they can use tariff-everywhere: from the command line as a developer, as an MCP server with Claude, or as a web interface to just look something up. Each mode has its own documentation, all starting from the same place.</p>

<p>CLAUDE.md became the deep documentation: architecture, patterns, how to debug when things go wrong, how to deploy. I wrote it knowing that future work on this project will probably involve Claude, and whoever touches the code next should understand the decisions that were made.</p>

<p>I also chose the Hippocratic License—an open-source license that protects against the code being used to cause harm. I wanted the code to be open (that’s important to me), but I also wanted guardrails. The Hippocratic License gave me both.</p>

<p>Licensing and documentation are the things people don’t think about when they’re building, but they matter enormously for longevity. A project without documentation dies. A project without thoughtful licensing can end up in places you never intended.</p>

<h2 id="building-with-ai">Building With AI</h2>

<p>The interesting thing about this project is that Claude was a thinking partner the whole way through. When I was confused about the API, Claude helped me understand what I was looking at. When I missed an ANSI control character vulnerability, Claude caught it. When I was stuck in a CLI-only mindset, Claude suggested a web interface and changed the whole trajectory of the project.</p>

<p>The later commits show the work of preparing the repository for ongoing collaboration with Claude: adding <code class="language-plaintext highlighter-rouge">.claude/</code> to gitignore, documenting patterns and decisions in a way that makes sense to an AI reading the codebase, and revising the core instructions when we missed something. This created a nice loop of shipping, pivoting, and grounding the work. The <a href="https://github.com/francisfuzz/tariff-everywhere/commits/main/CLAUDE.md">full <code class="language-plaintext highlighter-rouge">CLAUDE.md</code> history</a> shows what we went back and forth on most (Docker 😉).</p>

<p>This loop came up a lot in my mind and practice as we worked together:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Articulation → Discovery → Explain → Plan → Edit → Review → Refactor → Agent → Monitor → Learn
     ↑                                                                                      |
     └────────────────────── reflection loop ───────────────────────────────────────────────┘
</code></pre></div></div>

<hr />

<h2 id="what-this-whole-thing-taught-me">What This Whole Thing Taught Me</h2>

<p><strong>Defensive thinking matters.</strong> For every feature I added, I asked “what goes wrong?” first. Backups before mutations. Hashes to detect changes. Tests that run in isolation. None of it is glamorous, but it’s the difference between a project you can trust and one you can’t.</p>

<p><strong>Refactoring when you spot duplication pays dividends.</strong> The <code class="language-plaintext highlighter-rouge">hts_core/</code> extraction wasn’t required, but it meant later changes happened in one place instead of three.</p>

<p><strong>Naming is important.</strong> <code class="language-plaintext highlighter-rouge">usitc-app</code> was technically correct and completely unmemorable. <code class="language-plaintext highlighter-rouge">tariff-everywhere</code> tells you what the project does and how it works. Spend the time on names.</p>

<p><strong>Many modes of interaction beat one.</strong> A CLI for developers. An MCP server for Claude users. A web interface for everyone else. Same underlying code, three entry points. That’s good design.</p>

<p><strong>Documentation is not optional.</strong> Not as a checkbox, but because the next person to touch this code—including me, three months from now—needs to understand why decisions were made. CLAUDE.md isn’t a reference manual; it’s the paper trail of thinking.</p>

<h2 id="what-remains">What Remains</h2>

<p>The project is functional. All three modes work—CLI, MCP, web. Tests pass. The Datasette instance is live. The code is documented. The decisions are recorded. If I walked away today, someone could pick this up and maintain it. That feels complete—for now, until I come back to it.</p>

<p>What I’ve tried to do is leave the best gift a developer can leave: a codebase where decisions are explained, not just implemented. Where defensive patterns are intentional, not random. Where someone—whether that’s me in a few months or someone else entirely—can understand not just what the code does, but why it was built that way.</p>

<p>Give it a try: <a href="https://tariff-everywhere.fly.dev/">https://tariff-everywhere.fly.dev/</a></p>]]></content><author><name>Francis Batac</name></author><category term="project" /><summary type="html"><![CDATA[How I turned a public API into a CLI, MCP server, and searchable web interface—iteratively, with Claude as a thinking partner]]></summary></entry><entry><title type="html">Relearning Rust Through Play by building emoji_gen with Claude Code</title><link href="https://francisfuzz.com/posts/2025/12/27/relearning-rust-through-play-weekend-building-with-claude-code/" rel="alternate" type="text/html" title="Relearning Rust Through Play by building emoji_gen with Claude Code" /><published>2025-12-27T00:00:00+00:00</published><updated>2025-12-27T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2025/12/27/relearning-rust-through-play-weekend-building-with-claude-code</id><content type="html" xml:base="https://francisfuzz.com/posts/2025/12/27/relearning-rust-through-play-weekend-building-with-claude-code/"><![CDATA[<p>Last week I learned a ton using Claude Code to build my <a href="https://github.com/francisfuzz/emoji_gen">Rust CLI app Emojigen</a>.</p>

<p>Inspired by Oxide Computer Company’s <a href="https://oxide.computer/blog/why-rust">“Why Does Oxide Use Rust?”</a> blog post, I created this project to pick up Rust again. I planned the build with Claude and chose Rust as the base programming language for this command-line interface.</p>

<h2 id="getting-started">Getting Started</h2>

<p>I had some free hours last weekend and checked in throughout the week. I learned how to use Claude more effectively and optimize my token usage. I learned about <a href="https://doc.rust-lang.org/cargo/">Cargo</a> as a build system, setting up <a href="https://github.com/francisfuzz/emoji_gen/blob/main/.github/workflows/rust.yml">GitHub Actions workflows</a> for CI/CD with Rust projects, and using the <a href="https://crates.io/crates/emojis">emojis package</a> for random emoji selection.</p>

<p>The last time I touched Rust was a few years ago during a two-day O’Reilly course presented by my co-worker Nathan Stocks. I hadn’t picked it up since, but with the momentum in AI agentic workflows, I thought: why not build something in a language with both a speed advantage and a tailwind behind it?</p>

<h2 id="planning-and-ideation">Planning and Ideation</h2>

<p>I started by planning the build with Claude. Since I was on the go, I used Claude Chat to outline what I wanted, then compacted that context for Claude Code to execute. I had the scaffold ready from my mobile ideation sessions. When I finally sat down at my computer, putting things together was fun.</p>

<p>I looked at setting up a Rust project from scratch, finding appropriate GitHub Actions workflow templates for building and testing, then built my first 0.1 version. Originally I planned to follow a Hello World tutorial precisely, but I decided learning on the fly would be more interesting and engaging than my usual setup process.</p>

<h2 id="development-journey">Development Journey</h2>

<p>After finding the emoji package and <a href="https://crates.io/crates/clap">Clap</a> for argument parsing, I wanted to run this project not just locally on macOS but also in a Docker container. I remembered that at work, many of our Docker containers were already configured for GitHub Codespaces, but thinking through the setup myself was really valuable. I relearned Docker images, local installation, and how to minimize dependencies and cache them for future builds.</p>

<p>When I pivoted to GitHub Actions, I was pleased to discover pre-built workflows for building, formatting, linting, and testing Rust code. I also learned that unlike Go, TypeScript, and Ruby—where tests are written separately from implementation files—Rust allows writing unit tests right underneath the implementation code. This was interesting and saved me a cognitive cycle figuring out where tests should go.</p>

<p>I eventually decided to implement <a href="https://github.com/francisfuzz/emoji_gen/releases">releases</a> and <a href="https://github.com/francisfuzz/emoji_gen/blob/main/.github/workflows/audit.yml">security checks</a>, which I found really interesting. More on that later!</p>

<h2 id="testing-and-development">Testing and Development</h2>

<p>Writing <a href="https://github.com/francisfuzz/emoji_gen/blob/main/src/main.rs#L46-L72">unit tests in Rust</a> taught me about the different annotations available. I thought about what I wanted to test and how, and having Claude as a thinking partner to bounce ideas off was great.</p>

<p>One thing that impressed me was how Claude wasn’t just my thinking partner during ideation but throughout the entire project. It helped me inside issues, pull requests, and commits as I reviewed my own work. Claude wasn’t plugged into just one part of the process but throughout the whole workflow.</p>

<p>It’s amazing that something I’d used in the UI and as a base model for GitHub Copilot was also ubiquitous across different interfaces: my command line, Claude Code, my VSCode editor, and even GitHub Actions workflow updates via pull request comments. While I’ve used GitHub Copilot at work, it was interesting to see a different perspective in how Claude analyzed and processed my work.</p>

<h2 id="token-management-and-optimization">Token Management and Optimization</h2>

<p>One incredibly illuminating aspect was token usage. At work I don’t think about quotas—I’m grateful to issue commands at will—but here I had to manage context and watch token usage closely. I quickly discovered sub-agents, instructions, commands, and skills as ways to streamline my workflow and reduce typing (especially since I use Wispr Flow on my personal machine).</p>

<p>Codifying my work was really helpful, whether through a skill just for conventional commits or the Git workflow.</p>

<p>I found the <a href="https://github.com/anthropics/anthropic-skills/tree/main/examples/skill-creator">skill creator</a> that Brady shared in Anthropic’s skills repository particularly neat, along with the release process of pushing a tag and triggering a release build. You can check out <a href="https://github.com/francisfuzz/emoji_gen/tree/main/.claude/skills">all the skills I created</a>.</p>

<p>With regard to optimizing my Claude Code usage, I found a couple of things worked best:</p>

<ol>
  <li>While it was neat that Claude could follow both conventional commits and my Git workflow to the letter, I noticed how many tokens it used writing text, pushing changes, checking Git diffs, and pulling pull request information.</li>
  <li>I chose to use the exclamation mark (bang operator) to carefully execute the commands I wanted, especially for version control and checking pull request status. I saved the thinking-heavy steps for Claude, like debugging compilation or build errors and collaborating on test design.</li>
</ol>

<h2 id="reflection">Reflection</h2>

<p>Something that struck me about this whole experience was the element of play. It’s been a while since I’ve executed an idea without a timeline or pressing need. I was more interested in ideating, trying, and seeing what Claude and I would come up with along the way.</p>

<p>While I don’t plan to develop the application further since I’ve already implemented what I wanted, I found the journey extremely illuminating. Whether relearning my own tools or thinking about how to interact with agents not just as tools but as partners in the process, <code class="language-plaintext highlighter-rouge">/compact</code> has become my new best friend. The learnings here are cross-transferable between my work at GitHub and my personal projects.</p>

<h2 id="credits">Credits</h2>

<p>I need to credit these authors whose work helped me see things differently and try new approaches:</p>

<ul>
  <li><a href="https://steipete.me/posts/just-talk-to-it">steipete.me/posts/just-talk-to-it</a></li>
  <li><a href="https://jonmagic.com/posts/how-i-work-2025-edition/">Jon Magic’s “How I Work, 2025 Edition”</a></li>
  <li><a href="https://simonwillison.net/2025/Dec/10/html-tools/">simonwillison.net/2025/Dec/10/html-tools</a></li>
</ul>

<h2 id="closing-thoughts">Closing thoughts</h2>

<p>I asked Claude what it thought about my experience, specifically:</p>

<blockquote>
  <p>Going through this copy what’s something that you spotted but I did not mention that I think would be a good takeaway moving forward?</p>
</blockquote>

<p>Here’s the compacted summary:</p>

<p><strong>Managing collaboration bandwidth, not just token budgets.</strong> Through building Emojigen, I developed a practical framework for human-AI collaboration by tiering work across three levels:</p>

<ul>
  <li><strong>High-value tasks</strong> → Let Claude think (debugging, test design, architecture)</li>
  <li><strong>Repetitive tasks</strong> → Use bang operator for direct execution (git ops, status checks)</li>
  <li><strong>Reusable patterns</strong> → Codify into skills (conventional commits, workflows)</li>
</ul>

<p>Token constraints became a forcing function to think critically about where AI adds the most value. This delegation triage—deciding what’s worth the “conversation cost”—is transferable beyond Claude Code. As AI tools proliferate across workflows, the real skill isn’t just using them, but developing intuition about which tool, at which stage, and at what level of autonomy.</p>

<p>Until soon, happy building! &lt;3</p>]]></content><author><name>Francis Batac</name></author><category term="workflow" /><summary type="html"><![CDATA[What happens when you combine Rust, Claude Code, and free weekend hours? I built Emojigen and rediscovered the joy of coding without deadlines while learning about AI-assisted workflows.]]></summary></entry><entry><title type="html">10 Lessons from a Decade at GitHub</title><link href="https://francisfuzz.com/posts/2025/12/27/a-decade-at-github/" rel="alternate" type="text/html" title="10 Lessons from a Decade at GitHub" /><published>2025-12-27T00:00:00+00:00</published><updated>2025-12-27T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2025/12/27/a-decade-at-github</id><content type="html" xml:base="https://francisfuzz.com/posts/2025/12/27/a-decade-at-github/"><![CDATA[<p>Ten Years at GitHub: 10 Lessons Learned.</p>

<p>What’s your weapon of choice as a developer? When GitHub’s CEO asked me that during onboarding in 2015, I said “Unix” to sound cool.</p>

<p>Ten years later, I wish I’d said “asking great questions”—because that single skill shaped everything that followed across four different roles at GitHub.</p>

<p>Here are 10 lessons from a decade of serving others at GitHub.</p>

<hr />

<h2 id="1-asking-great-questions">1. Asking great questions</h2>

<p>During my first year as a support engineer, I’d read a customer’s question, interpret it one way, and send a reply—only to have them come back saying I’d completely misunderstood.</p>

<p>One of my mentors taught me a technique that changed everything. Instead of jumping to solutions, he’d ask: “If you had a magic wand, what would you expect to see? And why would that be important to your workflow?”</p>

<p>This question was gold. While most of our tickets were about GitHub’s API and integrations, this simple prompt revealed <em>how</em> people were actually using the platform and <em>where</em> the interface mattered most to them. That context helped me course-correct my replies and connect customers with the right resources and people.</p>

<p>The difference between a great question and a regular one? Context!</p>

<p><strong>Great questions are specific and get to the heart of what matters to your audience.</strong></p>

<p>As I moved from support to program management to partner engineering to product engineering, my questions evolved—adapting to different audiences (customers, stakeholders, partners, cross-functional teams) while becoming more refined as I gained context about what each role needed.</p>

<h2 id="2-finding-a-mentor-early-on-can-make-or-break-the-experience">2. Finding a mentor early on can make or break the experience</h2>

<p><a href="http://github.com/izuzak">Ivan Zuzak (@izuzak)</a> is a staff software engineer at GitHub. When we first met, we both worked in support, and his deep technical chops and distinctive writing style caught my attention. I was drawn to how he approached helping people—especially around APIs and integrations, the area I found most fascinating from my software engineering background.</p>

<p>I asked my manager how to approach him, and over the next 3½ years, we worked together answering every question under the sun about GitHub’s REST API, GraphQL API, OAuth apps, GitHub Apps, webhooks—anything with programmatic support.</p>

<p>What made Ivan exceptional wasn’t just his technical depth. He identified my superpower early: finding the right people to tackle a problem and building shared understanding across teams. He sponsored me by giving me an opportunity to present at an internal summit on the integrator experience, speaking alongside leaders from engineering, product, and design. That moment let me formalize what I did, how I did it, and why it mattered. It expanded my trajectory from support engineer to product engineer.</p>

<p>Later, when GitHub Actions launched, I became the internal <a href="https://en.wikipedia.org/wiki/Ombudsman">ombudsperson</a> for the feature, leveling up my team and empowering customers. Watching my own progression made me realize something critical: mentorship creates a debt you pay forward. I started seeking out people without mentors, trying to offer what Ivan had given me.</p>

<p>Years later, after several internal pivots, Ivan and I still check in every few months. We work in the same engineering organization now. I still look up to him, and I’m certain I wouldn’t be the same person without his guidance.</p>

<h2 id="3-once-youve-learned-enough-about-something-package-what-youve-learned-and-present-it">3. Once you’ve learned enough about something, package what you’ve learned and present it</h2>

<p>Early on, GitHub introduced the <a href="https://docs.github.com/en/rest/checks?apiVersion=2022-11-28">Checks API</a>—a richer interface for integrators to report build statuses beyond the simple pending/success/error states of the old Status API. I was intimidated. It wasn’t as intuitive as repositories or pull requests.</p>
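<p>To make the contrast concrete, here’s a minimal Python sketch of the two payload shapes. The field names and endpoints follow GitHub’s public REST documentation; the helper functions and the sample values are mine, for illustration only:</p>

```python
# Old Status API: POST /repos/{owner}/{repo}/statuses/{sha}
# knows only four states.
STATUS_STATES = {"error", "failure", "pending", "success"}

# Checks API: a completed check run carries one of several
# richer conclusions, plus structured output.
CHECK_CONCLUSIONS = {"success", "failure", "neutral", "cancelled",
                     "timed_out", "action_required", "skipped"}

def build_commit_status(state: str, context: str, description: str) -> dict:
    """Payload for the legacy commit Status API."""
    assert state in STATUS_STATES
    return {"state": state, "context": context, "description": description}

def build_check_run(name: str, head_sha: str, conclusion: str, summary: str) -> dict:
    """Payload for POST /repos/{owner}/{repo}/check-runs (GitHub App auth)."""
    assert conclusion in CHECK_CONCLUSIONS
    return {
        "name": name,
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": conclusion,
        "output": {"title": name, "summary": summary},
    }

# Sample values are hypothetical.
status = build_commit_status("pending", "ci/build", "Build queued")
check = build_check_run("ci/build", "abc123", "neutral",
                        "Lint found style nits only")
```

<p>The richer vocabulary is the point: a status is stuck choosing between pending, success, or error, while a check run can say “this completed, and the result was neutral” or “this needs your action”—exactly the nuance integrators were asking for.</p>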

<p>I spent time with the lead engineer, <a href="https://keavy.com/">@keavy</a>, asking basic questions the documentation didn’t answer. What does this feature do? Why does it exist? I extracted my notes into a Markdown document (this was before AI tools), created a slide deck, and hosted a lunch-and-learn for support engineers. Today, I’d issue multiple pull requests to the documentation or source while working on a Loom to contextualize it for humans in the future (yay distributed workforce!).</p>

<p>Then I invited the partner engineering team. They worked directly with integrators who deeply integrated with GitHub’s systems—high-touch, high-value relationships. Suddenly, packaging my knowledge wasn’t just helping my team on the front lines; it was helping partner engineers be more effective in enabling integrators to succeed.</p>

<p>The act of slowing down and explaining what I’d learned—to myself and others—crystallized my understanding. Follow-up questions filled gaps I didn’t know existed. I realized that even small contributions—a typo fix, a documented question—could improve the next support or engineering interaction.</p>

<p>This practice evolved across my roles. Today, in the age of AI tools, I use Copilot and Claude as thinking partners to analyze what I’ve written and help me package knowledge into reusable wisdom that applies across contexts.</p>

<h2 id="4-dont-just-teach-your-teamfind-others-who-use-the-same-technologies-and-equip-them-too">4. Don’t just teach your team—find others who use the same technologies and equip them too</h2>

<p>When I became one of the internal <a href="https://en.wikipedia.org/wiki/Ombudsman">ombudspersons</a> for <a href="https://github.com/features/actions">GitHub Actions</a>, I published our internal support documentation in a location where product engineers could see what we were working on and what questions we were getting. Other teams working with integrators and customers building on GitHub Actions could use it too.</p>

<p>This was a turning point. I realized I enjoyed being in the weeds—and sometimes climbing up to see the forest. That forest-level view came from pausing to reflect: What’s the core issue or lesson here? Who else could benefit from what I’ve learned?</p>

<p>Expanding beyond my immediate team wasn’t just about scale—it was about enriching others’ work and careers with lessons from my mistakes and discoveries.</p>

<h2 id="5-log-what-gives-you-zest-and-find-out-what-other-adjacent-activities-do-that-too">5. Log what gives you zest and find out what other adjacent activities do that too</h2>

<p>Early on, I found the thrill in internal research—writing up notes on how things work, why they work, and what to consider. I also loved teaching as a way to crystallize my understanding.</p>

<p>Combining research and teaching helped me pivot into roles like program management, giving me frameworks for understanding domains more deeply. Learning and relearning fundamentals became a pattern. Taking checkpoints—going from understanding how something works to actually practicing and using it—helped me expand into different engineering domains.</p>

<p>When I transitioned back to product engineering as one of the leads for optimizing GitHub Docs’ localization builds, I ramped up on modern JavaScript and build systems across vendors. I wasn’t building pipelines exclusively, but translating requirements across systems taught me how to communicate well asynchronously over text without losing signal.</p>

<p>The through-line? I logged what energized me and looked for adjacent activities that gave me the same feeling.</p>

<h2 id="6-grow-where-youre-planted">6. Grow where you’re planted</h2>

<p>I worked as a program manager for the <a href="https://github.com/community/community">GitHub Community</a> for almost a year. I missed the thrill of research, understanding how things work, and communicating directly with people—whether one-on-one or one-to-many.</p>

<p>When a partner engineer role opened up, I applied. It had crossover traits from my support engineering days: prototyping, communicating with integrators about how to build with our platform. But I had a few weeks before I could transition, and I needed to leave my PM work in good shape for whoever took over.</p>

<p>Instead of coasting, I used that time to learn something critical: how to scope work well and build partnerships. As a PM without direct reports or budget, I couldn’t influence metrics alone. I needed to find work that aligned with other teams’ goals—work that helped both of us move forward.</p>

<p>The PM year was hard; I wasn’t in the code. I wasn’t face-to-face with people the way I wanted to be. But that constraint forced me to think about my work more holistically, systematically, and cross-functionally.</p>

<p>Later, when I became a product and engineering lead, that lesson paid off. I could scope quarter-long epics into essential batches, spike out the most meaningful pieces of work, and build the partnerships needed to ship. The PM experience—despite not energizing me—taught me skills I use every day.</p>

<h2 id="7-technical-spikes-help-you-discover-if-youd-like-to-do-more-of-the-worknot-just-where-the-work-is-going">7. Technical spikes help you discover if you’d like to do more of the work—not just where the work is going</h2>

<p>A technical spike is a time-boxed exercise to answer a few small questions and gain clarity about what needs to change in a system. There’s no pressure to ship something usable on the first try—what matters is understanding how the system works and what its dependencies are before making changes.</p>

<p>On the new user experience team, I was tasked with updating the dashboard’s Copilot interface with targeted icebreakers for users who’d joined GitHub in the last month. The component was built in Rails and <a href="https://github.com/github/catalyst">Catalyst</a>. The question: should we migrate to React to ship these new icebreakers?</p>

<p>I started my spike by outlining the steps for a React migration. Then I discovered the icebreaker configurations were stored in a format that made them easy to change. The spike revealed we didn’t need to migrate everything right now—and documenting the migration path would help the next person who picked it up.</p>

<p>With shifting planning cycles, time was tight. I found a way to update the configuration and feature-flag the changes, shipping within a week instead of a month. That moment clarified something: I care about outcomes and impact. As much as I love refactoring code, shipping faster let us learn how people were using the icebreakers and where we needed to drive higher signal for users wanting to make the most of GitHub and Copilot.</p>

<p>My advice: Time-box a question or set of questions and dig in to learn how things work. While AI tools are great thinking partners, working with people in the field shows you what brings them joy and what wears them down. That insight is invaluable for deciding whether you want more of that work in your career.</p>

<h2 id="8-ai-agents-need-to-understand-system-wide-context-before-actingresearch-and-discovery-matter-more-than-ever">8. AI agents need to understand system-wide context before acting—research and discovery matter more than ever</h2>

<p>Over the last four years as a product engineer, I’ve watched AI integrate into every phase of the software development lifecycle—whether in VS Code or elsewhere. Planning, asking, editing, having agents work on our behalf—it’s just the start.</p>

<p>But here’s the friction I’m experiencing: <a href="https://www.youtube.com/watch?v=rmvDxxNubIg">agents have limited context windows</a>. Being able to narrow in on what context is most relevant for agents to operate on is the name of the game right now.</p>

<p>Before planning, the research and discovery I’ve described in earlier lessons becomes critical. Understanding how things work independently, how they work with other systems, and what behaviors emerge from those interactions—that’s the foundation agents need.</p>

<p>I want to see agents understand how one change in one part of the system affects the entire system and specific subsystems, even before writing a line of code. Just as we can build abstract syntax trees or create deterministic views of how symbols link together in code search, I’d love to see this kind of system-wide understanding reflected in agents as I work.</p>

<p>Beyond having agents move on our behalf, I think observing changes and iterating based on telemetry would be invaluable. After you’ve planned, implemented, and assessed work, you can codify it—update your configuration so the linter catches it next time, or ensure tests prevent regressions.</p>

<p>But there’s more: agents could analyze how we do this work. If we forgot something during planning, that gap could be codified into the planning or discovery process for next time. Having a focused and accurate memory for where things were, where they are, and where they’re going is just as important as the context window itself.</p>

<h2 id="9-its-not-shipped-until-you-write-about-ituse-story-to-codify-why-changes-matter">9. It’s not shipped until you write about it—use story to codify why changes matter</h2>

<p>One lesson I’ve carried throughout my time at GitHub: <a href="https://ben.balter.com/2015/07/20/write-corporate-blog-posts-as-a-human/#1-its-not-shipped-until-you-blog-about-it">it’s not shipped until you write about it</a>.</p>

<p>Eventually, every feature I’ve worked on that shipped publicly got a blog post. The narrative explains not just what changed, but why it mattered. Story is how we encode memory—where things were, where they are, where they’re going.</p>

<p>This blog post itself is proof: I’m using narrative to make sense of a decade.</p>

<p>I wonder: what would it look like for agents to capture this in their memory as they ship work? To tell a story about the work and why it was important? Like my daughter, I find stories captivating. How can we use story as a way to convey why a particular changeset matters and incorporate that into the software development lifecycle?</p>

<p>In some ways, the narrative arc could become our specification. Before we write code, we write the story of what we’re building and why it matters to users. That story becomes the North Star for implementation, testing, and ultimately the blog post that announces the feature.</p>

<p>Story isn’t just documentation—it’s how we make meaning from our work and share that meaning with others.</p>

<h2 id="10-the-questions-you-ask-shape-the-person-you-become">10. The questions you ask shape the person you become</h2>

<p>Looking back, I wish I’d told my CEO my weapon of choice was asking great questions—but it took ten years to understand why.</p>

<p>Every lesson here circles back to inquiry: asking customers about their workflows, mentors for guidance, myself what energizes me, systems how they work, agents what context they need, and always asking “why does this matter?”</p>

<p>The through-line isn’t just technical growth. It’s curiosity as a practice. I didn’t just learn technologies—I learned to ask increasingly sophisticated questions that revealed what I cared about and where I wanted to go.</p>

<p>In the age of AI agents, I’m still asking questions. How do we give agents the context they need? How do we codify learning so it persists? How do we use story to capture why work matters?</p>

<p>Ten years in, I’m grateful for the questions I’ve asked and the people who’ve helped me ask better ones. <code class="language-plaintext highlighter-rouge">&lt;3</code></p>

<h3 id="special-thanks-">Special Thanks ✨</h3>

<p>First, all glory to God. To my wife Angelica and daughter Ava, and my parents who’ve supported me since the beginning.</p>

<p>To <a href="https://github.com/izuzak">Ivan Zuzak</a> for being an exceptional mentor and showing me what great technical leadership looks like.</p>

<p>To the managers who believed in me, sponsored my transitions, and invested in my growth as a person—thank you for helping me become who I am today.</p>

<p>To the New User Experience team for teaching me what it means to build with empathy and impact.</p>

<p>And to the support, partner engineering, program management, and product engineering teams who taught me something new every day—thank you for ten incredible years of learning, building, and asking better questions together.</p>

<hr />

<p><em>This post was shaped in partnership with Claude, an AI assistant that helped me articulate these lessons and organize my thoughts. The act of explaining my decade to Claude became its own form of asking questions—which feels fitting.</em></p>]]></content><author><name>Francis Batac</name></author><category term="career" /><category term="GitHub" /><category term="mentorship" /><category term="software-engineering" /><category term="AI-agents" /><category term="career-growth" /><category term="technical-leadership" /><summary type="html"><![CDATA[Ten years across support engineering, program management, partner engineering, and product engineering—lessons on asking great questions, mentorship, technical spikes, and building with AI agents]]></summary></entry><entry><title type="html">How @francisfuzz approaches code review</title><link href="https://francisfuzz.com/posts/2025/06/02/how-i-approach-code-review/" rel="alternate" type="text/html" title="How @francisfuzz approaches code review" /><published>2025-06-02T00:00:00+00:00</published><updated>2025-06-02T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2025/06/02/how-i-approach-code-review</id><content type="html" xml:base="https://francisfuzz.com/posts/2025/06/02/how-i-approach-code-review/"><![CDATA[<p>I’ll be the first to confess: I love reviewing code. It’s one of my favorite things to do as a developer.</p>

<p>Code review is where software quality lives or dies. After a decade of reviewing thousands of pull requests across GitHub, startups, and consulting engagements, I’ve learned that great code review isn’t about catching bugs—it’s about building confidence in what ships to production.</p>

<h2 id="why-this-matters">Why this matters</h2>

<p>Every line of code that reaches your customers passes through code review. The difference between shipping with confidence and shipping with crossed fingers often comes down to how systematically your team approaches this critical gate. Poor code review processes create bottlenecks, missed issues, and team friction. Effective code review accelerates delivery while maintaining quality.</p>

<p>I’ve refined this approach across three distinct roles: as a senior engineer on GitHub’s New User Experience team, as a consultant for early-stage startups, and as a senior support engineer helping teams optimize their pull request workflows. This perspective spans customer-facing features, internal tooling, and developer experience improvements.</p>

<h2 id="the-three-phase-framework">The three-phase framework</h2>

<p>Effective code review follows a deliberate sequence: <strong>gather context, analyze changes, validate impact</strong>. Most reviewers jump straight to the code diff, missing critical context that determines whether changes align with business objectives and technical strategy.</p>

<h3 id="phase-1-context-before-code">Phase 1: Context before code</h3>

<p>Before examining a single line of code, I establish the strategic foundation. This prevents costly misalignment and ensures changes serve their intended purpose.</p>

<p><strong>Business context questions:</strong></p>
<ul>
  <li>Who initiated this work and why now?</li>
  <li>Which users or systems will this impact?</li>
  <li>What problem does this solve or opportunity does it capture?</li>
  <li>How does this align with current technical and product priorities?</li>
</ul>

<p><strong>Technical context questions:</strong></p>
<ul>
  <li>What’s the blast radius if this goes wrong?</li>
  <li>Are there dependencies or sequencing requirements?</li>
  <li>Does this require coordination with other teams?</li>
  <li>What validation environments are available?</li>
</ul>

<p>This context-first approach has prevented numerous issues that would have been expensive to catch in production. When I understand the “why” behind changes, I can evaluate whether the “how” achieves the intended outcome.</p>

<h3 id="phase-2-systematic-code-analysis">Phase 2: Systematic code analysis</h3>

<p>With context established, I examine the technical implementation through multiple lenses:</p>

<p><strong>Scope and structure:</strong></p>
<ul>
  <li>Do all changes belong in this pull request?</li>
  <li>Is the change size appropriate for safe deployment?</li>
  <li>How do the modified files relate to each other?</li>
  <li>Are there unrelated improvements that should be separated?</li>
</ul>

<p><strong>Quality and maintainability:</strong></p>
<ul>
  <li>Does the code follow established patterns and conventions?</li>
  <li>Are there performance, security, or accessibility implications?</li>
  <li>Is the implementation clear enough for future maintainers?</li>
  <li>Are appropriate tests included?</li>
</ul>

<p>I use a consistent commenting system to set clear expectations:</p>

<ul>
  <li><strong>💅 Non-blocking suggestion</strong>: Style improvements that don’t block approval</li>
  <li><strong>🙋 Question</strong>: Clarifications needed to understand intent or approach</li>
  <li><strong>🟡 Requested change</strong>: Issues that must be addressed before shipping</li>
  <li><strong>✨ Affirmation</strong>: Recognition of particularly well-executed solutions</li>
</ul>

<p>This taxonomy helps authors prioritize their response and understand what blocks progress versus what offers optional improvement.</p>

<h3 id="phase-3-validation-and-verification">Phase 3: Validation and verification</h3>

<p>The final phase ensures changes work as intended across relevant environments and edge cases.</p>

<p><strong>Validation checklist:</strong></p>
<ul>
  <li>Can I reproduce the original issue and verify the fix?</li>
  <li>Do staging or review environments demonstrate the changes correctly?</li>
  <li>Are there clear instructions for testing the changes?</li>
  <li>Have automated tests validated the expected behavior?</li>
</ul>

<p>This systematic validation has caught numerous issues that passed automated testing but failed in realistic usage scenarios.</p>

<h2 id="implementation-guidelines-for-teams">Implementation guidelines for teams</h2>

<p><strong>For engineering managers:</strong></p>
<ul>
  <li>Establish clear expectations around review turnaround times</li>
  <li>Ensure your team has adequate staging environments for validation</li>
  <li>Track review quality metrics alongside velocity metrics</li>
  <li>Invest in tooling that surfaces context automatically (linking to issues, ADRs, deployment history)</li>
</ul>

<p><strong>For individual contributors:</strong></p>
<ul>
  <li>Structure pull requests to tell a clear story from business need to technical solution</li>
  <li>Include comprehensive testing instructions in your descriptions</li>
  <li>Respond to review feedback systematically, addressing each comment explicitly</li>
  <li>Use review as a teaching opportunity, explaining your technical decisions</li>
</ul>

<p><strong>For senior engineers:</strong></p>
<ul>
  <li>Model thorough review practices, especially for junior team members</li>
  <li>Balance perfectionism with shipping velocity—not every suggestion needs to block progress</li>
  <li>Use review as mentorship opportunities to elevate team capabilities</li>
  <li>Document patterns and anti-patterns you see repeatedly</li>
</ul>

<h2 id="measuring-success">Measuring success</h2>

<p>Effective code review creates measurable improvements:</p>
<ul>
  <li>Reduced production incidents from preventable issues</li>
  <li>Faster onboarding for new team members through consistent code patterns</li>
  <li>Increased deployment confidence leading to more frequent releases</li>
  <li>Better knowledge sharing across the team</li>
</ul>

<p>Teams that invest in systematic code review processes ship faster and more reliably. The upfront time investment in thorough review pays dividends in reduced debugging, support burden, and technical debt.</p>

<h2 id="looking-ahead">Looking ahead</h2>

<p>Code review continues evolving with AI assistance, but the fundamental principles remain: understand context, analyze systematically, validate thoroughly. Tools can surface issues and suggest improvements, but human judgment about business alignment, user impact, and system architecture remains irreplaceable.</p>

<p>The best code review isn’t about perfection—it’s about confidence. When your team consistently applies these practices, you ship knowing your changes will work as intended, scale appropriately, and maintain the codebase quality that enables sustainable growth.</p>

<h3 id="special-thanks">Special thanks</h3>

<p>(Edited to add, June 5, 2025): thank you to my colleagues, be it in our conversations or in editorial review. Without you, this post would not be possible!</p>

<ul>
  <li><a href="https://github.com/andimiya">Andrea Takamiya</a></li>
  <li><a href="https://github.com/smashwilson">Ash Wilson</a></li>
  <li><a href="https://github.com/hkly">Hannah Yiu</a></li>
  <li><a href="https://github.com/juliekang">Julie Kang</a></li>
  <li><a href="https://github.com/cheshire137">Sarah Vessels</a></li>
  <li><a href="https://github.com/catsintents">Selene B. Pastelski</a></li>
</ul>]]></content><author><name>Francis Batac</name></author><category term="craft" /><summary type="html"><![CDATA[Supercharging shipping confidence]]></summary></entry><entry><title type="html">Web Summit Lisbon 2023</title><link href="https://francisfuzz.com/posts/2023/11/21/web-summit-lisbon-2023/" rel="alternate" type="text/html" title="Web Summit Lisbon 2023" /><published>2023-11-21T00:00:00+00:00</published><updated>2023-11-21T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2023/11/21/web-summit-lisbon-2023</id><content type="html" xml:base="https://francisfuzz.com/posts/2023/11/21/web-summit-lisbon-2023/"><![CDATA[<p>I first heard about Web Summit through a work colleague who attended the previous year. I followed the marketing emails for a while and applied for a Developer Ticket. After getting accepted, I did not know what to expect, but I knew that the best way to approach it was with an open mind. Attending this year’s Web Summit in Lisbon helped me gain a broader perspective on how people are developing technology outside of GitHub and Microsoft. I have shared my collective learnings below based on the three sessions that had the most profound impact on my experience.</p>

<p>I’d like to send a heartfelt thanks to the Web Summit team for extending the ticket. It’s been over three years since I attended a conference, and I am grateful to make my debut back with Web Summit Lisbon.</p>

<h2 id="pro-tips-for-pitching">Pro tips for pitching</h2>
<p>This session was hosted by Cristina Fonseca of <a href="https://www.indicocapital.com/">Indico Capital</a>. Each startup that presented gave me insight into how people are applying AI in their products, whether for developer tools, event planning, or even legal documentation.</p>

<p>What I found most interesting were the themes that emerged in Cristina’s questions as she evaluated each pitch. Here’s my AI-assisted summary of those themes and great questions to ask startups, depending on the context of their pitch:</p>

<p><strong>Product and Competitive Edge:</strong></p>
<ul>
  <li>Is the (digital) product a separate visual editor or an embedded widget?</li>
  <li>How do you maintain a competitive edge when these open-source models are accessible to everyone?</li>
  <li>What is your dependence on a third-party API to generate data or answers versus using your own proprietary technology?</li>
</ul>

<p><strong>Geographical Focus and Privacy:</strong></p>
<ul>
  <li>What is the current and future geographical focus for the short-term? How about the long-term?</li>
  <li>How do you ensure the privacy of students, especially minors, using the app?</li>
  <li>When the data is submitted, how is it stored and secured? What is your current compliance and security posture, and where can it be improved?</li>
</ul>

<p><strong>Business Model and Competition:</strong></p>
<ul>
  <li>Explain the business model: how do you make money?</li>
  <li>Share insights on competition in your market. If you think you don’t have any competitors, who or what do you anticipate being a risk to your business in the next year?</li>
  <li>Based on your slides, what is the base currency of your earnings?</li>
  <li>Is this a subscription-based model or something else?</li>
</ul>

<p><strong>Sales and Go-to-Market Strategy:</strong></p>
<ul>
  <li>How does selling differ between public and private schools?</li>
  <li>Detail your go-to-market strategy and target audience.</li>
  <li>Is this a business-to-business company? Or a business-to-consumer? Or a mix of something else?</li>
</ul>

<h2 id="who-are-you-how-the-convergence-of-microbes-human-dna-and-ai-can-alter-our-evolution">Who are you? How the convergence of microbes, human DNA, and AI can alter our evolution</h2>
<p>This session was hosted by Lauren Wright, CEO of <a href="https://thenaturalnipple.com/">The Natural Nipple</a>.</p>

<p>I initially discovered her work when she delivered her company’s <a href="https://www.youtube.com/watch?v=D7riTMgYfck">Web Summit 2023 Pitch</a>.</p>

<p>The conversation was rich, and while I couldn’t capture every point, I found Lauren’s discussion on the significance of prebiotic foods, the research behind the potency of breastmilk, and details in between fascinating.</p>

<p>One of the notable papers Lauren mentioned was <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223744/"><code class="language-plaintext highlighter-rouge">Equally Good Neurological, Growth, and Health Outcomes up to 6 Years of Age in Moderately Preterm Infants Who Received Exclusive vs. Fortified Breast Milk—A Longitudinal Cohort Study</code></a>. One key finding was the importance of the microbiome in early life: shaping microbial DNA in the first two years appears crucial for long-term health. The wonder of breastmilk is that its nutrients are tailor-made for the child in a way no other substance or food product can match. Where breastfeeding just isn’t possible, though, she promotes donor milk as the next best option for infants and toddlers.</p>

<p>As far as AI went, Lauren described the power of using machine learning and existing tooling to comprehensively process and combine the collective research on the microbiome for short-term research and related product development. AI, paired with a development team and a group of medical professionals, can be a powerhouse for interpreting microbiome data and for improving our collective understanding of the topic. One of Lauren’s final points addressed historical shifts in formula marketing and their potential correlation with increased inflammation.</p>

<h2 id="what-it-takes-to-be-a-technical-leader">What it takes to be a technical leader</h2>
<p>This session was moderated by Bobby Allyn of NPR hosting the following panelists: Christine Spang of Nylas, Simon Wistow of Fastly, and Emil Eifrem of Neo4j.</p>

<p>I appreciated that each of the panelists represented their companies’ respective “stage” of a startup and shared their perspectives from that angle: from seed and early round to a slightly larger company nearing one thousand employees, to a company that’s publicly traded.</p>

<p>Below is my paraphrased version of the questions Bobby asked along with the responses from each of the panelists.</p>

<blockquote>
  <p>How do you attract engineers from a bigger company like Facebook, Apple, Amazon, Netflix, or Google?</p>
</blockquote>

<ul>
  <li>From Simon: When pitching to a FAANG senior engineer, lead with the professional upside: you’re a bigger fish in a smaller pond, with outsized impact and direct effects on the organization. Be a company that people are proud to be a part of, and think broader than the technology.</li>
  <li>From Emil: Impact and culture. Build a relationship-centric culture that combines empathy with accountability.</li>
  <li>From Christine: Autonomy. At around 130 people, engineers have real control over their working day and who they work with. Be upfront about the company’s stage, and highlight the unique opportunity to work on internet-plumbing projects: deep distributed-systems problems and building the foundation for the future.</li>
</ul>

<blockquote>
  <p>Based on your company’s current stage, what is the most important thing you all are focused on achieving?</p>
</blockquote>

<ul>
  <li>For a growth startup, a software engineer’s job transitions from solely writing code. There’s a breadth and depth of competencies spanning from communicating to translating. The keys to success remain the same no matter the activity: being detail-oriented and following through on commitments. However, there may come a point where you transition away from being an individual contributor into a position where it’s more impactful to grow your team and keep it aligned on building up the business. In addition, you become an interface for working with other parties in the company to get things done together, which is one of the most difficult things to do, in contrast with the “coding” you were originally hired to do.</li>
  <li>For a larger organization (many hundreds of employees, let’s say), it comes down to the level of scope and abstraction based on the role. For example, as a technical leader, your original task was to build the product. As the product and its user base grow in size, there is a responsibility for the team to work together and for you to coordinate (or perhaps even better, orchestrate) their operation. You grow into the role of an organizational architect, arranging each unit to generate the most output. Your days will then be figuring out how to get your product managers working in tandem with Developer Relations, along with the Sales and Marketing teams.</li>
</ul>

<blockquote>
  <p>On AI: to what degree should this be core or complementary to your business?</p>
</blockquote>

<p>There’s a belief that in this environment, you have to start with AI being the core (and perhaps the only thing) driving your business. Taking a step back, what’s more critical is how it’s used, why, and for whom. Thinking about how your current offering can be enhanced or complemented with AI is better than having everything be AI-only.</p>

<blockquote>
  <p>What’s your advice for junior developers wanting to work at your startup?</p>
</blockquote>

<ul>
  <li>Open source experience is valuable</li>
  <li>“Eye of the tiger”: does this person have a sense of enthusiasm and a drive to push themselves? Do they care about the business and are they tuned in to the downstream impacts? Do they ask interesting questions tied to customers? Do they express curiosity that shows they’ll stay on a steep learning curve?</li>
  <li>Showcase the ability to think structurally, along with the ability to have a longer-term and broader perspective of the impact of what you’re building</li>
</ul>

<h2 id="closing-thoughts">Closing thoughts</h2>
<p>Attending Web Summit 2023 gave me insights I wouldn’t have gotten from watching a stream or a recording. Meeting people between sessions over coffee, and letting ideas connect organically from one session to the next, gave me a newfound perspective on how my day-to-day work affects people who are advancing their fields and serving their customers.</p>

<p>I look forward to returning next year and I will be applying as a speaker. Stay tuned! 😉</p>]]></content><author><name>Francis Batac</name></author><category term="learning" /><summary type="html"><![CDATA[Pro pitching tips, the microbiome's role in evolution, and what it takes to be a tech leader]]></summary></entry><entry><title type="html">Building Psychological Safety In Code Reviews</title><link href="https://francisfuzz.com/posts/2023/07/21/building-psychological-safety-in-code-reviews/" rel="alternate" type="text/html" title="Building Psychological Safety In Code Reviews" /><published>2023-07-21T00:00:00+00:00</published><updated>2023-07-21T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2023/07/21/building-psychological-safety-in-code-reviews</id><content type="html" xml:base="https://francisfuzz.com/posts/2023/07/21/building-psychological-safety-in-code-reviews/"><![CDATA[<p>GitHub hosts a monthly Day of Learning where employees are welcome to take the time they need to learn about something that’s related to their professional development.</p>

<p>There is a dedicated learning and development team that sources speakers and this month, I presented a talk about building psychological safety in the context of code reviews.</p>

<p>I first came across <code class="language-plaintext highlighter-rouge">psychological safety</code> while considering the path of becoming a manager years ago. Specifically, <a href="https://officevibe.com/blog/build-psychological-safety">OfficeVibe’s Psychological Safety: the key to high-performing teams</a> article gave me a high-level primer and it has become one of my core working practices as a technology professional.</p>

<p>The following post is an abbreviated version of that presentation. Before sharing the content, I’d like to share some healthy “disclosures”, if you will, so we’re all on the same page.</p>

<p>First, I am a software engineer by profession. I’m neither an anthropologist, psychologist, nor a psychiatrist; forgive me if I haven’t taken into account specific cultural or psychological contexts in this article. What’s shared here is my primitive understanding of how psychological safety <em>could</em> work in the context of code reviews based on my observations, experiences, and combined understanding of existing resources. It is by no means the definitive guide on how every software engineering team in the world should carry out this work.</p>

<p>Next, this content is for any professional that requires writing and reviewing code as a part of their day-to-day responsibilities, along with those who manage professionals who do those things. I understand that along with software engineers, technical program managers, data scientists, analysts, and even product managers within GitHub write code as a part of their work. This is my best attempt at being inclusive and to say that, if you code within a team setting, this is for you!</p>

<p>Last, I’m giving this my best and I know that I’m fallible. You’re welcome to <a href="https://github.com/francisfuzz/dotcom/pulls">open a pull request</a> for any suggested changes.</p>

<h2 id="what-is-psychological-safety">What is psychological safety?</h2>

<p>Let’s start with definitions. Amy Edmondson is the Novartis Professor of Leadership and Management at Harvard Business School and is known for her work with teams. In her TEDxHGSE talk <a href="https://www.youtube.com/watch?v=LhoLuui9gX8">Building a psychologically safe workplace</a> she defines psychological safety as:</p>

<blockquote>
  <p>a belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes.</p>
</blockquote>

<p>I recognize that the words <code class="language-plaintext highlighter-rouge">punished</code> and <code class="language-plaintext highlighter-rouge">humiliated</code> may strike some readers as severe. As I engaged with this definition, one of my mentors shared that psychological safety can also be impacted by less severe actions and sentiments. Acknowledging that there’s really a spectrum, one way we can widen the scope is to offer an alternative definition:</p>

<blockquote>
  <p>a belief that one will not be excluded or made inferior for speaking up with ideas, questions, concerns, or mistakes.</p>
</blockquote>

<p>Whenever I see terms defined in the negative, I challenge myself by taking a stab at defining it in the affirmative. Here’s the best I could come up with:</p>

<blockquote>
  <p>a belief that one will be recognized and affirmed for speaking up with ideas, questions, concerns, or mistakes.</p>
</blockquote>

<p>Depending on the context and cases I walk through, I’ve fluidly shifted between these three definitions in my day-to-day work life.</p>

<p>Moving from this common understanding, let’s talk about why it matters.</p>

<h2 id="why-does-it-matter">Why does it matter?</h2>

<p>Amy Edmondson shares this reason in the aforementioned presentation:</p>

<blockquote>
  <p>Every time we withhold, we rob ourselves and our colleagues of small moments of learning, and we don’t innovate.</p>
</blockquote>

<p>At GitHub, one of our leadership principles is “trust by default.” In my view, psychological safety is foundational to building trust in any of our working relationships. Without trust, it’s pretty tough to do anything else, whether we’re communicating with each other amidst conflict, building a new feature, debugging a tricky bug, or exercising the courage to challenge a controversial decision. Every meaningful piece of work comes from the bedrock of operating under high trust.</p>

<p>Google published their study on the <a href="https://rework.withgoogle.com/blog/five-keys-to-a-successful-google-team/">five keys to a successful Google team</a>, specifically what makes a Google team effective. Their research found that “Who is on a team matters less than how the team members interact, structure their work, and view their contributions.” Of the five key dynamics that set successful teams apart from other teams at Google, psychological safety was the most important. Here are those dynamics and a question to frame them in the intended context:</p>

<ol>
  <li>Psychological safety: Can we take risks on this team without feeling insecure or embarrassed?</li>
  <li>Dependability: Can we count on each other to do high quality work on time?</li>
  <li>Structure &amp; clarity: Are goals, roles, and execution plans on our team clear?</li>
  <li>Meaning of work: Are we working on something that is personally important for each of us?</li>
  <li>Impact of work: Do we fundamentally believe that the work we’re doing matters?</li>
</ol>

<p>McKinsey and Company studied psychological safety in the context of leadership development, where it plays <a href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/psychological-safety-and-the-critical-role-of-leadership-development">a critical role</a>. What they shared is that the most important driver of a team’s psychological safety is cultivating a positive team climate. The entire article is worth a read, but I found this particular finding illuminating:</p>

<blockquote>
  <p>By setting the tone for the team climate through their own actions, team leaders have the strongest influence on a team’s psychological safety. Moreover, creating a positive team climate can pay additional dividends during a time of disruption.</p>
</blockquote>

<blockquote>
  <p>Our research finds that a positive team climate has a stronger effect on psychological safety in teams that experienced a greater degree of change in working remotely than in those that experienced less change during the COVID-19 pandemic.</p>
</blockquote>

<blockquote>
  <p>Yet just <strong>43 percent</strong> of all respondents report a positive climate within their team.</p>
</blockquote>

<p>(The emphasis on that particular statistic is my own. 😉)</p>

<p>Psychological safety is built within teams, where each member believes that they won’t be punished, humiliated, excluded, or made inferior for raising a question, concern, mistake, or new idea. It’s important because it’s one of the ways to build trust, which is the foundation of highly effective teams.</p>

<p>As professionals that write code day-to-day, what are some ways of bringing this into our code review practice?</p>

<h2 id="how-do-we-build-psychological-safety">How do we build psychological safety?</h2>

<p><a href="https://survivethrive.win/blog/f/psychological-safety">Professor Edmondson offers SAFE as an acronym</a>, where SAFE stands for:</p>

<ul>
  <li>Setting Limits</li>
  <li>Approachability</li>
  <li>Fallibility</li>
  <li>Engagement</li>
</ul>

<h3 id="setting-limits">Setting Limits</h3>

<p>Limits (read: expectations) give team members confidence to know what the boundaries are: anything that is off-limits or out-of-bounds. When these are known, they can take the time to experiment and try new things within those boundaries.</p>

<p>At GitHub, we have company-wide policies on opening and reviewing pull requests. Teams that manage one or more services use <a href="https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/creating-a-pull-request-template-for-your-repository">pull request templates</a> to set guidelines that authors fill out before opening a pull request. Once these limits are set, each member is responsible for practicing them, and they hold each other accountable for following through on what’s agreed upon.</p>
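<p>To make this concrete, here’s a sketch of what a minimal template could look like in a repository’s <code class="language-plaintext highlighter-rouge">.github/pull_request_template.md</code>. The section headings below are hypothetical examples of mine, not GitHub’s actual template:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>## What are you trying to accomplish?

## What approach did you choose, and why?

## Anything you’d like reviewers to focus on?
</code></pre></div></div>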

<h3 id="case-study-github-sponsors-pr-review-emoji-schema">Case Study: GitHub Sponsors’ PR Review Emoji Schema</h3>

<p>One of the problems the GitHub Sponsors team faced in their code reviews was not having a clear understanding of whether comments on a pull request were blocking or not.</p>

<p>After investigation and research, they decided on a new process of using emojis to indicate the type of review comment being left on a pull request.</p>

<p>Here’s their system:</p>

<table>
  <tbody>
    <tr>
      <td>💅🏽</td>
      <td>non-blocking changes</td>
    </tr>
    <tr>
      <td>🛑</td>
      <td>blocking changes</td>
    </tr>
    <tr>
      <td>🔜</td>
      <td>fast-follows (address non-blocking changes in a separate PR to come after the one being reviewed)</td>
    </tr>
    <tr>
      <td>❓</td>
      <td>questions</td>
    </tr>
  </tbody>
</table>

<p>Credits go to <a href="https://github.com/knittingarch">@knittingarch</a> and <a href="https://github.com/cheshire137">@cheshire137</a> for doing the work in making this happen, along with the GitHub Sponsors team for their permission to share this case study publicly!</p>
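<p>In practice, review comments using this schema might read like the following. These example comments are hypothetical, written by me to illustrate the idea:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>💅🏽 Consider renaming `fetch_all_sponsors` to `sponsors` for brevity.
🛑 This query runs once per sponsor in the loop; let’s batch it before merging.
🔜 We should cover the zero-sponsors case in a follow-up PR.
❓ Is `user_id` guaranteed to be present here?
</code></pre></div></div>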

<h3 id="approachability">Approachability</h3>

<p><a href="https://www.etymonline.com/word/approachable">Approachable</a> literally comes from the word “accessible”; in the figurative sense, <a href="https://www.etymonline.com/word/affable#etymonline_v_5189">“affable”</a>, which in context is one that’s “open to conversation or approach.”</p>

<p>In Derek Prior’s <a href="https://www.youtube.com/watch?v=PJjmw9TRB7s">RailsConf 2015 - Implementing a Strong Code-Review Culture</a>, he shares a tip for using pull requests and asking questions as a way of driving technical discussion. He mentioned that being aware of the negativity bias that comes from reading words in plain text can help drive our approachability: being intentionally and mindfully positive in code review comments lets the discussion flow in a way that gives everyone a chance to learn more about the code.</p>

<p>As the author of a pull request, being approachable could look like writing up specific questions for reviewers to respond to, being clear about what kind of feedback you’re looking for.</p>

<p>As the reviewer of a pull request, being approachable could look like asking questions that <a href="https://www.franklincovey.com/habit-5/">seek to understand</a> the author’s perspective, rather than prioritizing being understood. For example:</p>

<ul>
  <li>“What do you think about…?”</li>
  <li>“With Y in place, what’s your take on considering X as well?”</li>
  <li>“Could you tell me more about your process in…?”</li>
</ul>

<p>Tying back to the McKinsey and Company research, team leaders have the strongest influence on a team’s psychological safety. Beyond team leaders, leaders in individual contributor positions, whatever their tenure, experience, or level, have an opportunity to set the tone for teammates through how they communicate within code reviews, which in turn lets their peers match the tone that’s set.</p>

<p>Imagine the ripple effect of folks being approachable with one another and how that could impact code reviews moving forward!</p>

<h3 id="fallibility">Fallibility</h3>

<p>To be fallible means to be <a href="https://www.etymonline.com/word/fallible#etymonline_v_1098">liable to err</a>. In practice, it’s about taking the time to let others know that you can make mistakes, and to admit to them when they happen. In turn, this lets others know that you’re open!</p>

<p>Examples:</p>

<ul>
  <li>Visibly thank people for catching your mistakes.</li>
  <li>Show your work process, not only the result or where you’ve arrived. Highlighting where you’re unfamiliar with certain areas brings teammates up to speed on where they could provide more precise input. You could even lead with an ask; here are some examples shared by my colleague <a href="https://github.com/UnicodeRogue"><code class="language-plaintext highlighter-rouge">@UnicodeRogue</code></a>:
    <ul>
      <li>“From my perspective, …”</li>
      <li>“My understanding is …”</li>
      <li>“Considering A, I did B because…, but could use some pointers about P and Q, namely…”</li>
    </ul>
  </li>
</ul>

<h3 id="engagement">Engagement</h3>

<p>Engaging people means coming in with the understanding that, when operating with a team, <a href="https://youtu.be/LhoLuui9gX8?t=467">Amy Edmondson describes it like this</a>:</p>

<blockquote>
  <p>Make explicit that there’s enormous uncertainty ahead and enormous interdependence.</p>
</blockquote>

<blockquote>
  <p>Given those two things, we’ve never been here before. We can’t know what will happen.</p>
</blockquote>

<blockquote>
  <p>We’ve got to have everybody’s brains and voices in the game.</p>
</blockquote>

<blockquote>
  <p>That creates the rationale for speaking up.</p>
</blockquote>

<p>Put in my own words:</p>

<blockquote>
  <p>I don’t know what’s ahead, and I need your perspective for all of us to move ahead.</p>
</blockquote>

<blockquote>
  <p>We’re better together.</p>
</blockquote>

<p>This is a mindset. No no, <em>the</em> mindset, in my humble opinion.</p>

<p>What does this look like in practice? While keeping this mindset, I’ve found that using people’s names or online handles while addressing them is a good place to start (please don’t give them a nickname they did not ask for). Echoing the points on approachability and fallibility, using curiosity as a way to lead and open technical discussion helps, too. Last, in my own experience, I’ve found that taking the time to reach out to your team member(s) and getting to know them <em>outside</em> of the code review can help with building psychological safety. Knowing how they speak and how they approach things outside of a review has helped my brain contextualize their plain text in their actual voice, which has helped me better assess what they’re saying versus making an assumption of <em>what I think they’re saying</em>.</p>

<h2 id="whats-next-where-do-i-go-now">What’s next? Where do I go now?</h2>

<p>The examples in SAFE are things that you could try with your team today. However, building psychological safety within a team is not going to happen overnight. Code reviews are but one medium where we can do that. It’s going to take time and every member’s actions to make it happen, so that a resounding “Yes” can be the answer to this question:</p>

<blockquote>
  <p>How confident are you that you won’t receive retaliation or criticism if you admit an error or make a mistake?</p>
</blockquote>

<p>(Source: <a href="https://rework.withgoogle.com/guides/understanding-team-effectiveness/steps/introduction/">Google’s Guide to understanding team effectiveness</a>).</p>

<p>Here are some additional resources that have influenced me that might be worth checking out too:</p>

<ul>
  <li><a href="https://dora.dev/devops-capabilities/cultural/generative-organizational-culture/">DORA: Generative Organizational Culture</a></li>
  <li>Harvard Business Review: <a href="https://hbr.org/2017/08/high-performing-teams-need-psychological-safety-heres-how-to-create-it">High-Performing Teams Need Psychological Safety: Here’s How to Create It</a> by Laura Delizonna</li>
</ul>

<h2 id="special-thanks">Special Thanks</h2>

<p>There were several individuals and teams that made my presentation possible. Here’s a special thanks to you all, <code class="language-plaintext highlighter-rouge">&lt;3</code>:</p>

<ul>
  <li>Day of Learning Organizers</li>
  <li>GitHub Education Team</li>
  <li>Reviewers: Ernest, Ivan, DeeDee, Roniece, Sarah, Daniel, Laura</li>
  <li>Hubber Alumni: keavy, kytrinyx, gjtorikian, jasonrudolph, jnraine</li>
</ul>]]></content><author><name>Francis Batac</name></author><category term="engineering" /><summary type="html"><![CDATA[What would code reviews look like if teams prioritized this?]]></summary></entry><entry><title type="html">Shipping small</title><link href="https://francisfuzz.com/posts/2023/04/06/shipping-small-potent-pull-requests/" rel="alternate" type="text/html" title="Shipping small" /><published>2023-04-06T00:00:00+00:00</published><updated>2023-04-06T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2023/04/06/shipping-small-potent-pull-requests</id><content type="html" xml:base="https://francisfuzz.com/posts/2023/04/06/shipping-small-potent-pull-requests/"><![CDATA[<p>One of GitHub’s Leadership principles is “ship to learn.” I’ve found that this principle is often effective when I ship small, potent pull requests: a pull request that is small in scope yet has a high impact on the product.</p>

<p>Here’s the <code class="language-plaintext highlighter-rouge">tl;dr</code> version of this post in relation to the above principle:</p>

<ol>
  <li>Before working on a new feature, I should ask myself what the smallest, most potent changes I can make are and write that down in an issue where it’s visible to the team.</li>
  <li>When I start working on a feature, I should open a draft pull request as soon as I have something to show.</li>
  <li>After two working days, if I haven’t marked my pull request as ready for review, I’m probably working on too much and should ask for my team’s input in narrowing the scope of my pull request into smaller, more potent pull requests.</li>
</ol>

<h2 id="the-story">The story</h2>

<p>A few weeks ago, I joined a new track at work focused on shipping a new feature. My track lead gave me the creative autonomy to scope my assigned portion: building a system for managing user permissions.</p>

<p>Our track’s designer shared their designs with me and I used that as a start for my first pull request. I introduced a static version of this system using only <a href="https://primer.style/view-components">Primer View Components</a> and a few of the Rails primitives (routes, controllers, and views). By only focusing on the UI, I was able to ship a small, potent pull request.</p>

<p>Feeling confident, I charged in to work on my next pull request!</p>

<p>Specifically, I started working on a part of the backend that exposed a new method for returning meaningful data in the UI. On Monday, I had a handful of unpolished files, at about the 30% mark of what I felt was ready. On Tuesday and Wednesday, the scope of my pull request grew as I went deeper into development. By Thursday morning, over 20 files had been changed, spanning from updating the views I originally worked on to adding new models and controllers. 😬</p>

<p>When I finally marked my <a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests#draft-pull-requests">draft pull request</a> ready for review, one of the staff engineers on my team reviewed my work as it was. One of the most valuable questions she asked was if I’d be open to shipping the model-specific changes in separate and earlier pull requests.</p>

<p>I immediately agreed. Writing this out, I’ll go on the record and say that I’m glad I did.</p>

<p>On that Thursday afternoon, I moved the model-specific changes to separate pull requests. This gave me a chance to refactor the new module I introduced and to double-check my tests.</p>

<p>As a part of that work, I found some mistakes in the internal documentation I wrote for that module. For context, the module is written to be mixed into other models. I noticed, however, that I had written the documentation as if it were intended to only be mixed into one kind of model.</p>

<p>I updated it by making the documentation more generic:</p>

<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-    # Public: Returns the authors of a {MODEL_NAME} in a given resource.
</span><span class="gi">+    # Public: Returns the authors in a given resource, depending on the model it's
+    # called on.
</span></code></pre></div></div>

<p>Another mistake I found was in the return value of the method. When I updated the method’s API, I forgot to double-check these annotations, so I took the liberty of updating them:</p>

<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-    # Returns an ordered Array of authors.
</span><span class="gi">+    # Returns an Array of Arrays, where each element contains the author's username and ID.
</span></code></pre></div></div>

<p>By having only four files to focus on rather than twenty, I felt that my self-review was much more focused and something I could confidently share with my reviewers to help them understand the changes I made.</p>

<p>When I re-requested her review near my end-of-day, she gave me very focused feedback on the tests that taught me a more concise way of writing the same test fixtures using <a href="https://www.rubydoc.info/gems/factory_bot/FactoryBot/Syntax/Methods:create_list">FactoryBot’s <code class="language-plaintext highlighter-rouge">create_list</code> method</a>:</p>

<div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">-      # Before: use Ruby's `times` method to create two instances of a model
-
-      2.times do
-        create(:some_resource, post: @unanswered_post, user: author_one)
-      end
</span>
<span class="gi">+      # After: use FactoryBot's `create_list` method to create two instances of a model
+
+      create_list(:some_resource, 2, post: @unanswered_post, user: author_one)
</span></code></pre></div></div>

<p>I came back to work on Friday morning. My goal was to address her feedback by making those requested changes and ship that pull request. After requesting another review, she approved the changes and I spent the rest of the day shepherding it through our deployment queue to get it into production! 🚀</p>

<h2 id="for-the-future">For the future</h2>

<p>My <code class="language-plaintext highlighter-rouge">tl;dr</code> captures the essence of what I’d like to try. However, something I came across months ago is this practice of <a href="https://github.blog/2020-05-21-github-protips-tips-tricks-hacks-and-secrets-from-sarah-vessels/#daisy-chaining-pull-requests">Daisy-chaining pull requests</a>. <a href="https://github.com/cheshire137"><code class="language-plaintext highlighter-rouge">@cheshire137</code></a> wrote about this at length in <a href="https://github.blog/2020-05-21-github-protips-tips-tricks-hacks-and-secrets-from-sarah-vessels/">GitHub Protips: Tips, tricks, hacks, and secrets from Sarah Vessels</a> and I’ll write about it in a future post!</p>]]></content><author><name>Francis Batac</name></author><category term="engineering" /><summary type="html"><![CDATA[Potent Pull Requests]]></summary></entry><entry><title type="html">Seeking Clarity Without Being A Jerk</title><link href="https://francisfuzz.com/posts/2021/12/15/seeking-clarity-without-being-a-jerk/" rel="alternate" type="text/html" title="Seeking Clarity Without Being A Jerk" /><published>2021-12-15T00:00:00+00:00</published><updated>2021-12-15T00:00:00+00:00</updated><id>https://francisfuzz.com/posts/2021/12/15/seeking-clarity-without-being-a-jerk</id><content type="html" xml:base="https://francisfuzz.com/posts/2021/12/15/seeking-clarity-without-being-a-jerk/"><![CDATA[<p>I wrote this many moons ago when I still worked as a program manager for the <a href="https://github.community/">GitHub Support Community forum</a>. Back then, I focused on understanding and improving our problem resolution rate. The nature of my work was strategic, creative, and collaborative and my projects varied from researching, ideating, and creating content for helping first-time community members to collaborating with other teams to tackle a backlog of topics in our forum.</p>

<h2 id="lets-talk-about-ambiguity-️">Let’s talk about ambiguity 💁‍♀️</h2>

<p>One of the key skills I’m learning to develop is identifying assumptions and asking questions, especially when new projects come in. Some projects already have a clear goal in mind, while others don’t.</p>

<p>For the latter, there are times when I feel like this:</p>

<p><img src="https://media4.giphy.com/media/cLw4RZHnJCIcR9uMhP/giphy.gif?cid=ecf05e472y57z3fbfzzz6gz5hr0j7golzahpw9c2e0hzkgre&amp;rid=giphy.gif" width="400" alt="What am I looking at" /></p>

<p>There are times when I go through “analysis paralysis,” where I think more about what to do than actually doing it. Something that’s helped me move forward is asking for help.</p>

<p>Recently, one of the ways that I asked for help with understanding this problem space better is by joining our company’s <code class="language-plaintext highlighter-rouge">#how-do-i</code> channel. The channel describes itself as a “safe place to start if you have questions about how to do or find something at GitHub.”</p>

<p>The channel lives up to its name because of the many Hubbers who generously share their time and expertise. For this key skill, I asked them:</p>

<blockquote>
  <p><em>How do I learn more about asking the right questions to clarify ambiguous work requirements?</em></p>
</blockquote>

<p>Shortly after posting the question, I received a stream of responses, resources, and ridiculously good advice. I’ve done the work of summarizing it so you can read and apply it when the situation arises. 😉</p>

<h2 id="the-tricky-thing-about-just-asking-why-">The tricky thing about just asking… “why?” 😬</h2>

<p><img src="https://media0.giphy.com/media/duJFJwEwvf5KPTMZSi/giphy.gif?cid=ecf05e47j35ndzh24dympg4bndveqzf82anr84lsvz29rbs5&amp;rid=giphy.gif" width="400" alt="Moira asking why" /></p>

<p>When faced with ambiguity, it’s natural for us to just ask “why?”</p>

<p>Knowing the motivation behind a project is a key factor before choosing to invest more time and resources.</p>

<p>The way we find out about the motivation is also important depending on the outcome we want to drive.</p>

<p>Cindy, one of my colleagues, helped me understand that anytime we ask a plain “why?”, it can sound like we’re being defensive or reluctant to help. 😬</p>

<h2 id="repositioning-why-with-curiosity-">Repositioning “why” with curiosity 👩‍🔬</h2>

<p><img src="https://media4.giphy.com/media/h81fYY4QWj4hlEuqiN/giphy.gif?cid=ecf05e47o5dbvz3g3i1ko3iohsn7xcifzr3nqfqodpgyokey&amp;rid=giphy.gif" width="400" alt="Colour me curious" /></p>

<p>We can reposition this single-word question to exercise curiosity, starting with expressions like these:</p>

<ul>
  <li>“Just to be sure I’m clear…”</li>
  <li>“I want to make sure I know the best way to help…”</li>
  <li>“Tell me more …”</li>
</ul>

<p>And then leading into the question:</p>

<ul>
  <li>“What’s the underlying problem you’re trying to solve?”</li>
  <li>“If we had that completed already, what’s the benefit you’d expect to get?”</li>
  <li>“What does your ideal outcome or end goal look like?”</li>
</ul>

<p>It can also help to add a reason after asking the question:</p>

<ul>
  <li>“That way, we can figure out the best options together.”</li>
  <li>“I want to make sure I’m answering the most relevant questions for you.”</li>
</ul>

<p>Take the time to listen to these responses. Write them down and consider the other person’s perspective. If this doesn’t “work” for any reason, you can pivot the approach by stating your assumptions up front instead:</p>

<ul>
<li>“I’ve read your [doc/specification/ticket] and here are my assumptions. There’s almost certainly something that is missing or needs correcting, so please do so!”
    <ul>
      <li>Assumption: X is higher priority than Y</li>
      <li>Assumption: X only happens in Y situation</li>
      <li>Assumption: The reason X is a problem is Y.</li>
    </ul>
  </li>
</ul>

<p>It’s much easier for people to react to assumptions than to open-ended questions, so you tend to get quick responses.</p>

<p>This approach can feel abrupt or offhand the first time you do it. It may also help to open with a disclaimer before stating the assumptions, and to offer to meet over a call to clarify things further:</p>

<blockquote>
  <p><em>Hey, I’m going to state some assumptions in the hopes that it can shorten our meetings. If these are completely off-base, let’s jump on Zoom to discuss in real-time.</em></p>
</blockquote>

<p>Going deeper, my awesome colleague Lizzy shared an experience around the difficulties of discussing work requirements if there’s a technical knowledge difference between two parties:</p>

<blockquote>
  <p><em>I’ve found work requirements are harder to discuss if there is a technical knowledge difference between the two parties and one of my roles is to remind my teammates that they need to break down the technical components for me (and realistically for others reading who are not security SMEs).  I’m not really afraid of looking like I know nothing anymore, but it can be hard to constantly out yourself as not very technical.  It can feel like the answers to some of Cindy’s questions are obvious to everyone else, but it’s surprising how often that is not the case.</em></p>
</blockquote>

<p>Cindy reassured her that it’s okay to acknowledge those feelings and to confront the sentiments behind the questions head-on:</p>

<blockquote>
  <p><em>I often tell people “it will feel awkward and embarrassing to ask these questions!” And unfortunately it doesn’t really go away - I still feel a little anxious when asking people to clarify something or step back and re-state the problem.  What helps is that I’ve seen it work every time: unless people are really resisting out of bad faith, they will answer and someone else in the room besides me will say “oh! I didn’t realize that”.</em></p>
</blockquote>

<p>Lizzy also highlighted identifying assumptions as a primary technical skill for a Program Manager, along with the power of mindfulness:</p>

<blockquote>
  <p><em>I try to keep in mind as well that people don’t want to assume I know nothing, because that’s also a bad look.  I remind myself that identifying assumptions is part of my technical skill as a PM.</em></p>
</blockquote>

<blockquote>
  <p><em>I’m also big on self-deprecating humor because not all assumptions are done with poor intent and it helps set the mood as “I need to know this thing, but I don’t need to focus on who’s fault it is that you didn’t already tell me this thing.”  Obviously, there are also times where you need to focus on how the assumption has affected progress or is belittling, but especially here, I have had wayyy more instances of the former.</em></p>
</blockquote>

<h2 id="going-deeper-">Going deeper 🐇</h2>

<p><img src="https://media2.giphy.com/media/3o6Ygj9fubFPnKVFN6/200w.webp?cid=ecf05e47joudwodcz69envthczwb8tk89tavh691ujddh2wb&amp;rid=200w.webp" width="400" alt="Tell me more" /></p>

<p>A number of my colleagues suggested these resources to learn more about the topic:</p>

<ul>
  <li><a href="https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">StackExchange: “What is the XY problem?”</a></li>
  <li><a href="https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">Domain-driven Design: Tackling Complexity in the Heart of Software</a></li>
  <li><a href="https://www.amazon.com/Agile-Retrospectives-Making-Teams-Great/dp/0977616649">Agile Retrospectives: Making Good Teams Great</a></li>
  <li><a href="https://www.amazon.com/Project-Retrospectives-Handbook-Reviews-Dorset-ebook/dp/B00DY3KQJU">Project Retrospectives: A Handbook for Team Reviews</a></li>
  <li><a href="https://www.amazon.com/Power-Positive-No-Relationship-Still/dp/0553384260">The Power of a Positive No: Save The Deal Save The Relationship and Still Say No</a></li>
</ul>]]></content><author><name>Francis Batac</name></author><category term="communication" /><summary type="html"><![CDATA[How to reduce ambiguity when presented unclear expectations at work]]></summary></entry></feed>