<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Code and Compass]]></title><description><![CDATA[Code hard. Travel far.]]></description><link>https://www.codeandcompass.net/</link><image><url>https://www.codeandcompass.net/favicon.png</url><title>Code and Compass</title><link>https://www.codeandcompass.net/</link></image><generator>Ghost 5.80</generator><lastBuildDate>Thu, 30 Apr 2026 13:17:54 GMT</lastBuildDate><atom:link href="https://www.codeandcompass.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Ship Your MVP from Home: Ditch Expensive Servers with Cloudflare Tunnel]]></title><description><![CDATA[<p>You have a powerful computer sitting under your desk. Meanwhile, you&apos;re paying $50&#x2013;200/month to run your MVP on someone else&apos;s computer. Let&apos;s fix that.</p><p>This guide walks through how I set up a home server with Cloudflare Tunnel to host multiple</p>]]></description><link>https://www.codeandcompass.net/ship-your-mvp-from-home-ditch-expensive-servers-with-cloudflare-tunnel/</link><guid isPermaLink="false">699953be23c1e40001f99444</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Sat, 21 Feb 2026 06:46:02 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_qd3qtjqd3qtjqd3q.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_qd3qtjqd3qtjqd3q.png" alt="Ship Your MVP from Home: Ditch Expensive Servers with Cloudflare Tunnel"><p>You have a powerful computer sitting under your desk. Meanwhile, you&apos;re paying $50&#x2013;200/month to run your MVP on someone else&apos;s computer. 
Let&apos;s fix that.</p><p>This guide walks through how I set up a home server with Cloudflare Tunnel to host multiple web apps &#x2014; for $0/month in infrastructure costs. I&apos;ll also share a real debugging war story about HTTPS that&apos;ll save you hours.</p><hr><h2 id="the-problem">The Problem</h2><p>Cloud server costs add up fast when you&apos;re building:</p>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Service</th>
<th>Monthly Cost</th>
</tr>
</thead>
<tbody>
<tr>
<td>DigitalOcean Droplet (4GB)</td>
<td>$24</td>
</tr>
<tr>
<td>AWS EC2 t3.medium</td>
<td>$30</td>
</tr>
<tr>
<td>Heroku (2 dynos + DB)</td>
<td>$50+</td>
</tr>
<tr>
<td>Vercel Pro</td>
<td>$20</td>
</tr>
<tr>
<td>Managed DB (Postgres/MySQL)</td>
<td>$15&#x2013;50</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<p>Run two or three apps and you&apos;re easily at <strong>$100&#x2013;200/month</strong> before you have a single paying customer.</p><p>Meanwhile, your home machine &#x2014; probably 16&#x2013;64 GB RAM, 8+ cores, 1 TB SSD &#x2014; is idle 80% of the time.</p><p>The catch has always been: how do you expose a home machine to the internet without a static IP, port forwarding, and a prayer?</p><p><strong>Cloudflare Tunnel.</strong></p><hr><h2 id="the-architecture">The Architecture</h2><p>Here&apos;s how traffic flows:</p><pre><code>Browser (HTTPS)
    &#x2193;
Cloudflare Edge (terminates TLS)
    &#x2193; (encrypted tunnel, outbound from your machine)
cloudflared daemon (your home machine)
    &#x2193; (HTTP)
Traefik reverse proxy (port 80)
    &#x2193; (HTTP)
Your app containers (Ghost, Postgres, whatever)
</code></pre><p>The key insight: <strong>your machine makes an outbound connection to Cloudflare.</strong> No port forwarding. No static IP. No firewall holes. Cloudflare routes incoming requests back through that persistent connection.</p><hr><h2 id="what-you-need">What You Need</h2><ul><li><strong>A home machine</strong> &#x2014; Linux recommended (Ubuntu/Debian). Any old desktop or mini PC works.</li><li><strong>Cloudflare account</strong> &#x2014; Free tier is enough.</li><li><strong>A domain</strong> &#x2014; Pointed to Cloudflare&apos;s nameservers.</li><li><strong>Docker</strong> &#x2014; For running your apps.</li><li><strong>Coolify</strong> &#x2014; Self-hosted PaaS (free Heroku alternative). Or use Docker Compose directly.</li></ul><hr><h2 id="step-1-install-cloudflared">Step 1: Install cloudflared</h2><p>Install the Cloudflare Tunnel daemon:</p><pre><code class="language-bash"># Debian/Ubuntu
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
sudo dpkg -i cloudflared.deb

# Authenticate with your Cloudflare account
cloudflared tunnel login

# Create a tunnel
cloudflared tunnel create office
</code></pre><p>This gives you a tunnel ID and a credentials JSON file. Note both &#x2014; you&apos;ll need them.</p><p>Install it as a system service so it survives reboots:</p><pre><code class="language-bash">sudo cloudflared service install
sudo systemctl enable cloudflared
sudo systemctl start cloudflared
</code></pre><hr><h2 id="step-2-configure-the-tunnel">Step 2: Configure the Tunnel</h2><p>Here&apos;s my real <code>config.yml</code> (at <code>~/.cloudflared/config.yml</code>):</p><pre><code class="language-yaml">tunnel: a564a227-4d0d-4d12-92fe-e88298de08e8
credentials-file: /home/kevin/.cloudflared/a564a227-4d0d-4d12-92fe-e88298de08e8.json

ingress:
  # Coolify dashboard
  - hostname: coolify.httprapidoccasions.com
    service: http://localhost:8000

  # SSH access (must be before wildcard)
  - hostname: ssh.httprapidoccasions.com
    service: tcp://localhost:22

  # Wildcard &#x2014; all deployed apps route through Coolify&apos;s Traefik
  - hostname: &quot;*.httprapidoccasions.com&quot;
    service: http://localhost:80

  # Catch-all
  - service: http_status:404
</code></pre><p>The magic is the <strong>wildcard rule</strong>. Any subdomain (<code>app1.yourdomain.com</code>, <code>app2.yourdomain.com</code>) routes to Traefik on port 80, which then routes to the right container. Deploy a new app, give it a subdomain &#x2014; it just works.</p><figure class="kg-card kg-image-card"><img src="https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_w9hsmjw9hsmjw9hs--1-.png" class="kg-image" alt="Ship Your MVP from Home: Ditch Expensive Servers with Cloudflare Tunnel" loading="lazy" width="1024" height="1024" srcset="https://www.codeandcompass.net/content/images/size/w600/2026/02/Gemini_Generated_Image_w9hsmjw9hsmjw9hs--1-.png 600w, https://www.codeandcompass.net/content/images/size/w1000/2026/02/Gemini_Generated_Image_w9hsmjw9hsmjw9hs--1-.png 1000w, https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_w9hsmjw9hsmjw9hs--1-.png 1024w" sizes="(min-width: 720px) 720px"></figure><h3 id="dns-setup">DNS Setup</h3><p>In your Cloudflare dashboard, add a wildcard CNAME:</p><pre><code>Type: CNAME
Name: *
Target: &lt;tunnel-id&gt;.cfargotunnel.com
Proxy: enabled (orange cloud)
</code></pre><p>And one for the base domain or specific subdomains as needed.</p><hr><h2 id="step-3-set-up-coolify">Step 3: Set Up Coolify</h2><p><a href="https://coolify.io/?ref=codeandcompass.net">Coolify</a> is a self-hosted PaaS &#x2014; think Heroku or Vercel, but free and running on your own machine. It gives you:</p><ul><li>Git push deployments</li><li>Automatic SSL (via Traefik, which it manages)</li><li>Database provisioning (Postgres, MySQL, Redis, etc.)</li><li>A web dashboard for managing everything</li></ul><p>Install with one command:</p><pre><code class="language-bash">curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
</code></pre><p>After installation, access the dashboard at <code>http://localhost:8000</code> (or through your tunnel at <code>coolify.yourdomain.com</code>).</p><p>Coolify runs Traefik as its reverse proxy on port 80. This is what the wildcard ingress rule in our tunnel config points to.</p><hr><h2 id="step-4-deploy-your-first-app">Step 4: Deploy Your First App</h2><p>Let&apos;s deploy Ghost (the blogging platform) as an example.</p><p>In Coolify:</p><ol><li><strong>New Resource</strong> &#x2192; <strong>Docker Compose</strong></li><li>Paste your compose config or connect a Git repo</li><li>Set the domain to <code>blog.yourdomain.com</code></li><li>Deploy</li></ol><p>Coolify handles the Traefik labels, container networking, and routing. Your app is live at <code>https://blog.yourdomain.com</code> within minutes.</p><p>Want another app? Same process, different subdomain. The wildcard tunnel + wildcard DNS means <strong>zero infrastructure changes per app.</strong></p><hr><h2 id="the-https-gotcha-this-will-save-you-hours">The HTTPS Gotcha (This Will Save You Hours)</h2><p>Here&apos;s where I burned time so you don&apos;t have to.</p><h3 id="the-symptom">The Symptom</h3><p>After deploying Ghost behind Cloudflare Tunnel, things looked wrong:</p><ul><li>Infinite redirect loops (<code>ERR_TOO_MANY_REDIRECTS</code>)</li><li>Mixed content warnings &#x2014; the page loads but CSS/JS references <code>http://</code> URLs</li><li>Admin panel redirecting to HTTP and breaking</li></ul><h3 id="the-root-cause">The Root Cause</h3><p>The TLS/HTTP handoff creates a mismatch:</p><ol><li><strong>The browser</strong> connects to Cloudflare over <strong>HTTPS</strong></li><li><strong>Cloudflare</strong> terminates TLS and forwards to your tunnel over <strong>HTTP</strong></li><li><strong>Traefik</strong> receives the request as <strong>HTTP</strong></li><li><strong>Your app</strong> sees an HTTP request and thinks: &quot;I&apos;m not on HTTPS&quot; &#x2014; generates HTTP URLs, forces 
redirects, etc.</li></ol><p>The app doesn&apos;t know the original request was HTTPS because nobody told it.</p><h3 id="the-fix-two-parts">The Fix (Three Parts)</h3><p><strong>Part 1: Traefik middleware to set <code>X-Forwarded-Proto</code></strong></p><p>Add middleware to Traefik that tells your apps &quot;the original request was HTTPS&quot;:</p><pre><code class="language-yaml"># In your Traefik dynamic config or Docker labels:
labels:
  - &quot;traefik.http.middlewares.https-scheme.headers.customrequestheaders.X-Forwarded-Proto=https&quot;
  - &quot;traefik.http.routers.your-app.middlewares=https-scheme&quot;
</code></pre><p>This injects the <code>X-Forwarded-Proto: https</code> header into every request reaching your app.</p><p><strong>Part 2: Tell your app to trust the proxy</strong></p><p>Most web frameworks ignore <code>X-Forwarded-*</code> headers by default (to prevent spoofing). You need to explicitly enable proxy trust.</p><p>For <strong>Ghost</strong>, set this environment variable:</p><pre><code class="language-yaml">environment:
  server__trustProxy: &quot;true&quot;
</code></pre><p>For other frameworks:</p>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Framework</th>
<th>Setting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Express.js</td>
<td><code>app.set(&apos;trust proxy&apos;, true)</code></td>
</tr>
<tr>
<td>Django</td>
<td><code>SECURE_PROXY_SSL_HEADER = (&apos;HTTP_X_FORWARDED_PROTO&apos;, &apos;https&apos;)</code></td>
</tr>
<tr>
<td>Rails</td>
<td><code>config.force_ssl = true</code> + <code>config.assume_ssl = true</code></td>
</tr>
<tr>
<td>Laravel</td>
<td>Set <code>TRUSTED_PROXIES=*</code> in <code>.env</code></td>
</tr>
<tr>
<td>Next.js</td>
<td>Generally works out of the box with headers</td>
</tr>
<tr>
<td>Flask</td>
<td><code>ProxyFix(app, x_proto=1)</code></td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
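<p>The underlying decision every framework makes is the same, and it&apos;s worth internalizing. Here&apos;s a minimal shell sketch of that logic &#x2014; purely illustrative, not any framework&apos;s actual code: once the app trusts the proxy, the value of <code>X-Forwarded-Proto</code> determines which scheme it generates URLs with.</p>

```shell
# Illustrative only: the decision an app makes behind a TLS-terminating proxy.
# $1 = value of the X-Forwarded-Proto header ("" if absent or untrusted)
scheme_for() {
  if [ "$1" = "https" ]; then
    echo "https"   # app generates https:// URLs -- no redirect loop
  else
    echo "http"    # app assumes plain HTTP -- wrong URLs, redirect loops
  fi
}

scheme_for "https"   # -> https
scheme_for ""        # -> http
```

<p>To check the loop symptom from outside, something like <code>curl -sIL --max-redirs 5 https://blog.yourdomain.com/</code> will show the chain of <code>Location:</code> headers; a chain that bounces back to the same URL is the loop.</p>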
<p><strong>Part 3: Cloudflare SSL setting</strong></p><p>In Cloudflare Dashboard &#x2192; SSL/TLS, set the encryption mode to <strong>Full</strong> (not &quot;Flexible&quot;). Flexible tries to connect to your origin over HTTP and can cause redirect loops when your app expects HTTPS.</p><h3 id="the-pattern">The Pattern</h3><p>This three-part fix works for <strong>every</strong> app behind Cloudflare Tunnel + Traefik:</p><ol><li>Traefik injects <code>X-Forwarded-Proto: https</code></li><li>App trusts proxy headers</li><li>Cloudflare SSL mode set to Full</li></ol><p>Internalize this pattern. You&apos;ll use it every time.</p><hr><h2 id="cost-comparison">Cost Comparison</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th></th>
<th>Home Server</th>
<th>DigitalOcean</th>
<th>AWS</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compute</td>
<td>$0 (existing hardware)</td>
<td>$24&#x2013;48/mo</td>
<td>$30&#x2013;60/mo</td>
</tr>
<tr>
<td>Domain</td>
<td>~$10/year</td>
<td>~$10/year</td>
<td>~$10/year</td>
</tr>
<tr>
<td>Cloudflare</td>
<td>$0 (free tier)</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>SSL</td>
<td>$0 (Cloudflare)</td>
<td>$0 (Let&apos;s Encrypt)</td>
<td>$0 (ACM)</td>
</tr>
<tr>
<td>Managed DB</td>
<td>$0 (self-hosted)</td>
<td>$15&#x2013;50/mo</td>
<td>$15&#x2013;100/mo</td>
</tr>
<tr>
<td>Load balancer</td>
<td>$0 (Traefik)</td>
<td>$12/mo</td>
<td>$16/mo</td>
</tr>
<tr>
<td><strong>Total (3 apps)</strong></td>
<td><strong>~$1/mo</strong> (electricity)</td>
<td><strong>$75&#x2013;150/mo</strong></td>
<td><strong>$100&#x2013;250/mo</strong></td>
</tr>
<tr>
<td><strong>Annual savings</strong></td>
<td>&#x2014;</td>
<td><strong>$900&#x2013;1,800</strong></td>
<td><strong>$1,200&#x2013;3,000</strong></td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
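<p>The electricity figure is easy to sanity-check. A back-of-the-envelope sketch, assuming a small mini PC averaging 10 W and a rate of 15 cents/kWh &#x2014; both are assumptions, so plug in your own numbers:</p>

```shell
# Rough monthly electricity cost for an always-on box.
# Assumed (not measured): 10 W average draw, 15 cents per kWh.
watts=10
rate_cents_per_kwh=15

kwh_per_month=$(( watts * 24 * 30 / 1000 ))          # 10 * 720 / 1000 = 7 kWh
cost_cents=$(( kwh_per_month * rate_cents_per_kwh )) # 7 * 15 = 105 cents
printf '~$%d.%02d/month\n' $(( cost_cents / 100 )) $(( cost_cents % 100 ))
# prints: ~$1.05/month
```

<p>A desktop-class machine drawing 50&#x2013;100 W lands closer to $5&#x2013;11/month &#x2014; still far below any of the cloud rows above.</p>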
<p>The home server costs you electricity and a domain. That&apos;s it. For MVP stage, that&apos;s the right answer.</p><hr><h2 id="when-to-move-to-the-cloud">When to Move to the Cloud</h2><p>The home setup is great for MVPs and early traction, but you&apos;ll outgrow it when:</p><ul><li><strong>You need uptime guarantees.</strong> Home internet goes down. Power goes out. Your setup has zero redundancy. If a customer SLA matters, move to the cloud.</li><li><strong>Latency matters.</strong> Your home server is in one location. Cloud providers have regions worldwide. If users in Asia are hitting a server in your apartment in Ohio, they&apos;ll notice.</li><li><strong>You&apos;re scaling past your hardware.</strong> When your apps need more RAM/CPU than your machine has, it&apos;s time.</li><li><strong>Security requirements increase.</strong> Enterprise customers will ask about SOC 2, data residency, etc. &quot;It&apos;s in my office&quot; is not the answer they want.</li><li><strong>Your team grows.</strong> Multiple engineers need access to infrastructure. Cloud platforms have better IAM, audit logs, and collaboration tools.</li></ul><p>The good news: if you&apos;re using Docker and Coolify, migrating is straightforward. Export your compose files, spin up a VPS, deploy. The app doesn&apos;t care where it runs.</p><hr><h2 id="conclusion">Conclusion</h2><p>The best infrastructure for an MVP is the one that costs nothing and gets out of your way.</p><p>Cloudflare Tunnel + a home machine + Coolify gives you:</p><ul><li><strong>Unlimited apps</strong> on a single machine with wildcard routing</li><li><strong>HTTPS everywhere</strong> with zero certificate management</li><li><strong>Git-push deploys</strong> through Coolify&apos;s dashboard</li><li><strong>$0/month</strong> infrastructure cost</li></ul><p>Stop paying cloud providers to run your prototype. 
Ship from home, get users, validate the idea &#x2014; <em>then</em> spend money on infrastructure when the revenue justifies it.</p><hr><h2 id="copying-this-article-to-another-machine">Copying This Article to Another Machine</h2><p>Since the home server has SSH access through the tunnel, you can pull this file from any machine:</p><pre><code class="language-bash">scp kevin@ssh.httprapidoccasions.com:/home/kevin/Development/personal/innovate-hub/infra/office-tunnel/blog-home-server-mvp.md ./blog-home-server-mvp.md
</code></pre><p>Or if using a non-standard SSH port or key:</p><pre><code class="language-bash">scp -i ~/.ssh/your_key kevin@ssh.httprapidoccasions.com:/home/kevin/Development/personal/innovate-hub/infra/office-tunnel/blog-home-server-mvp.md .
</code></pre><p>Note: SSH through Cloudflare Tunnel requires the client to use <code>cloudflared access</code> as a proxy. Set up your <code>~/.ssh/config</code>:</p><pre><code>Host ssh.httprapidoccasions.com
    ProxyCommand cloudflared access ssh --hostname %h
    User kevin
</code></pre><p>Then <code>scp</code> works as normal:</p><pre><code class="language-bash">scp ssh.httprapidoccasions.com:/home/kevin/Development/personal/innovate-hub/infra/office-tunnel/blog-home-server-mvp.md .
</code></pre>]]></content:encoded></item><item><title><![CDATA[Stop Losing Your tmux Layouts: A Simple Tool to Save and Restore Your Terminal Setups]]></title><description><![CDATA[<p>If you&apos;re a tmux power user, you know the pain. You&apos;ve spent time crafting the perfect workspace&#x2014;multiple windows for different projects, panes split just right, each one positioned in its correct directory. Then your computer restarts, or you close the terminal by accident, and</p>]]></description><link>https://www.codeandcompass.net/tmux-save-and-load/</link><guid isPermaLink="false">698696257356780001fe4051</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Wed, 18 Feb 2026 22:14:28 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_cwl2crcwl2crcwl2.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2026/02/Gemini_Generated_Image_cwl2crcwl2crcwl2.png" alt="Stop Losing Your tmux Layouts: A Simple Tool to Save and Restore Your Terminal Setups"><p>If you&apos;re a tmux power user, you know the pain. You&apos;ve spent time crafting the perfect workspace&#x2014;multiple windows for different projects, panes split just right, each one positioned in its correct directory. Then your computer restarts, or you close the terminal by accident, and it&apos;s all gone.</p><p>What if you could snapshot your entire tmux setup and restore it with a single command?</p><p>That&apos;s exactly what this toolkit does.</p><h2 id="the-problem-with-tmux-sessions">The Problem with Tmux Sessions</h2><p>Tmux is incredible for managing terminal workflows. 
It lets you:</p><ul><li>Create multiple windows within a single terminal</li><li>Split windows into panes</li><li>Detach and reattach sessions</li><li>Keep processes running in the background</li></ul><p>But here&apos;s what tmux <em>doesn&apos;t</em> do well: <strong>persistent layouts</strong>.</p><p>Sure, you can detach a session and it keeps running. But what about:</p><ul><li>Saving a layout to use on a different machine?</li><li>Keeping multiple layout configurations for different types of work?</li><li>Recovering your setup after a reboot?</li><li>Sharing your workspace configuration with teammates?</li></ul><p>That&apos;s where these scripts come in.</p><h2 id="the-toolkit-three-simple-commands">The Toolkit: Three Simple Commands</h2><p>This toolkit consists of three bash scripts that work together:</p>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Command</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>tmux-save</code></td>
<td>Save your current tmux session layout</td>
</tr>
<tr>
<td><code>tmux-load</code></td>
<td>Restore a previously saved layout</td>
</tr>
<tr>
<td><code>tmux-list</code></td>
<td>View and manage your saved layouts</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
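<p>Under the hood, the save step can be built almost entirely on tmux&apos;s format strings. The sketch below shows the likely capture query &#x2014; the format variables are real tmux ones, but the wrapper is illustrative and the actual script may differ. Outside a tmux session it just prints sample lines in the same shape:</p>

```shell
# Sketch: capture the WINDOW/PANE lines a layout file is made of.
capture_layout() {
  if [ -n "${TMUX:-}" ]; then
    # Inside tmux: query the server for real window and pane state
    tmux list-windows -F 'WINDOW|#{window_index}|#{window_name}|#{window_layout}'
    tmux list-panes -s -F 'PANE|#{window_index}|#{pane_index}|#{pane_current_path}|#{pane_current_command}'
  else
    # Not inside tmux: emit sample lines showing the shape of the output
    echo 'WINDOW|0|editor|a1f6,208x54,0,0'
    echo 'PANE|0|0|/home/you/project|zsh'
  fi
}
capture_layout
```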
<p>All layouts are stored in <code>~/.tmux-layouts/</code> as simple, human-readable files.</p><hr><h2 id="installation">Installation</h2><ol><li>Clone or copy the three scripts to a directory in your PATH:</li></ol><pre><code class="language-bash"># Option 1: Copy to /usr/local/bin
sudo cp tmux-save tmux-load tmux-list /usr/local/bin/

# Option 2: Add the script directory to your PATH
echo &apos;export PATH=&quot;$PATH:/path/to/tmux-tools&quot;&apos; &gt;&gt; ~/.bashrc
</code></pre><ol><li>Make them executable:</li></ol><pre><code class="language-bash">chmod +x tmux-save tmux-load tmux-list
</code></pre><p>That&apos;s it. No dependencies beyond bash and tmux itself.</p><hr><h2 id="how-it-works">How It Works</h2><h3 id="saving-a-layout-tmux-save">Saving a Layout: <code>tmux-save</code></h3><p>When you run <code>tmux-save</code>, it captures:</p><ul><li><strong>Every window</strong> in your current session (name and layout)</li><li><strong>Every pane</strong> within each window (position, working directory, running command)</li><li><strong>The exact layout geometry</strong> so panes are restored to the same proportions</li></ul><p><strong>Usage:</strong></p><pre><code class="language-bash"># Save with a specific name
tmux-save myproject

# Or run without arguments to be prompted for a name
tmux-save
# Enter a name for this layout: myproject
</code></pre><p><strong>Output:</strong></p><pre><code>&#x2705; Layout &apos;myproject&apos; saved to /Users/you/.tmux-layouts/myproject.layout
</code></pre><p><strong>What gets saved (example layout file):</strong></p><pre><code># tmux layout: myproject
# saved: Thu Feb 6 17:30:00 PST 2026
# session: dev

WINDOW|0|editor|a1f6,208x54,0,0{104x54,0,0,1,103x54,105,0,2}
PANE|0|0|/Users/you/projects/myapp|nvim
PANE|0|1|/Users/you/projects/myapp|zsh
WINDOW|1|servers|5b23,208x54,0,0,3
PANE|1|0|/Users/you/projects/myapp|npm
WINDOW|2|logs|5b24,208x54,0,0,4
PANE|2|0|/var/log|tail
</code></pre><p>The format is intentionally simple. You can even edit these files by hand if needed.</p><hr><h3 id="loading-a-layout-tmux-load">Loading a Layout: <code>tmux-load</code></h3><p>Restoring a layout is just as easy. <code>tmux-load</code> handles three scenarios automatically:</p><ol><li><strong>Running outside tmux</strong> &#x2014; Creates and attaches to a new session</li><li><strong>Running inside tmux</strong> &#x2014; Creates the session and switches to it</li><li><strong>Reloading the current session</strong> &#x2014; Rebuilds in place without losing your terminal</li></ol><p><strong>Usage:</strong></p><pre><code class="language-bash"># Load by name directly
tmux-load myproject

# Or run without arguments to see a menu
tmux-load
</code></pre><p><strong>Interactive menu output:</strong></p><pre><code>&#x1F4CB; Saved tmux layouts:

  1) myproject  (Thu Feb 6 17:30:00 PST 2026)
  2) backend-work  (Wed Feb 5 09:15:00 PST 2026)
  3) writing  (Mon Feb 3 14:22:00 PST 2026)

Select a layout (number) or &apos;q&apos; to quit: 1
</code></pre><p><strong>What happens when you load:</strong></p><ul><li>A new tmux session is created with the name <code>tmux-&lt;layout-name&gt;</code></li><li>Each window from the saved layout is recreated with its original name</li><li>Each pane is recreated, positioned, and <code>cd</code>&apos;d to its saved directory</li><li>The original window layouts (split proportions) are restored</li></ul><p><strong>Result:</strong></p><pre><code>&#x2705; Layout &apos;myproject&apos; restored as session &apos;tmux-myproject&apos;
</code></pre><hr><h3 id="managing-layouts-tmux-list">Managing Layouts: <code>tmux-list</code></h3><p>See all your saved layouts at a glance:</p><pre><code class="language-bash">tmux-list
</code></pre><p><strong>Output:</strong></p><pre><code>&#x1F4CB; Saved tmux layouts:

  &#x1F4C1; myproject
     Saved: Thu Feb 6 17:30:00 PST 2026 | Session: dev | Windows: 3 | Panes: 4

  &#x1F4C1; backend-work
     Saved: Wed Feb 5 09:15:00 PST 2026 | Session: api | Windows: 2 | Panes: 5

  &#x1F4C1; writing
     Saved: Mon Feb 3 14:22:00 PST 2026 | Session: docs | Windows: 1 | Panes: 2
</code></pre><p><strong>Delete a layout:</strong></p><pre><code class="language-bash">tmux-list -d
# or
tmux-list --delete
</code></pre><p>This shows an interactive menu to select and delete a layout.</p><hr><h2 id="real-world-use-cases">Real-World Use Cases</h2><h3 id="1-project-specific-workspaces">1. Project-Specific Workspaces</h3><p>Save different layouts for different projects:</p><pre><code class="language-bash"># Working on the frontend
tmux-save frontend-dev

# Switch to backend work
tmux-load backend-dev
</code></pre><h3 id="2-role-based-setups">2. Role-Based Setups</h3><p>Create layouts for different types of work:</p><ul><li><code>tmux-save coding</code> &#x2014; Editor + terminal + test runner</li><li><code>tmux-save debugging</code> &#x2014; Logs + debugger + shell</li><li><code>tmux-save writing</code> &#x2014; Clean single-pane setup for documentation</li></ul><h3 id="3-machine-migration">3. Machine Migration</h3><p>Moving to a new laptop? Copy your <code>~/.tmux-layouts/</code> directory and all your workspace configurations come with you.</p><h3 id="4-team-standardization">4. Team Standardization</h3><p>Share layout files with your team. Everyone gets the same development environment structure.</p><hr><h2 id="tips-and-tricks">Tips and Tricks</h2><h3 id="alias-for-quick-access">Alias for Quick Access</h3><p>Add these to your <code>.bashrc</code> or <code>.zshrc</code>:</p><pre><code class="language-bash">alias ts=&apos;tmux-save&apos;
alias tl=&apos;tmux-load&apos;
alias tls=&apos;tmux-list&apos;
</code></pre><h3 id="combine-with-tmux-hooks">Combine with tmux Hooks</h3><p>You can auto-save your layout when detaching:</p><pre><code class="language-bash"># In ~/.tmux.conf
set-hook -g client-detached &apos;run-shell &quot;tmux-save autosave&quot;&apos;
</code></pre><h3 id="create-a-default-layout">Create a &quot;Default&quot; Layout</h3><p>Save your ideal starting point:</p><pre><code class="language-bash">tmux-save default
</code></pre><p>Then start every day fresh:</p><pre><code class="language-bash">tmux-load default
</code></pre><hr><h2 id="how-it-compares-to-alternatives">How It Compares to Alternatives</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Feature</th>
<th>tmux-resurrect</th>
<th>tmuxinator</th>
<th>This Toolkit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Save running sessions</td>
<td>&#x2705;</td>
<td>&#x274C;</td>
<td>&#x2705;</td>
</tr>
<tr>
<td>YAML config files</td>
<td>&#x274C;</td>
<td>&#x2705;</td>
<td>&#x274C;</td>
</tr>
<tr>
<td>Multiple named layouts</td>
<td>&#x274C;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
</tr>
<tr>
<td>Zero dependencies</td>
<td>&#x274C; (plugins)</td>
<td>&#x274C; (ruby)</td>
<td>&#x2705;</td>
</tr>
<tr>
<td>Human-readable saves</td>
<td>&#x274C;</td>
<td>&#x2705;</td>
<td>&#x2705;</td>
</tr>
<tr>
<td>Works anywhere</td>
<td>Plugin needed</td>
<td>Gem needed</td>
<td>Just bash</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
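<p>Part of why &quot;zero dependencies&quot; is achievable: each line type in the layout format has a fixed prefix, so the <code>Windows: N | Panes: M</code> summary falls out of <code>grep -c</code>. A sketch of how <code>tmux-list</code> might compute it &#x2014; the sample file contents here are made up:</p>

```shell
# Count windows and panes in a saved layout by line prefix.
layout=$(mktemp)
cat > "$layout" <<'EOF'
# tmux layout: myproject
WINDOW|0|editor|a1f6,208x54,0,0
PANE|0|0|/home/you/project|nvim
PANE|0|1|/home/you/project|zsh
WINDOW|1|servers|5b23,208x54,0,0
PANE|1|0|/home/you/project|npm
EOF

windows=$(grep -c '^WINDOW|' "$layout")
panes=$(grep -c '^PANE|' "$layout")
echo "Windows: $windows | Panes: $panes"   # prints: Windows: 2 | Panes: 3
rm -f "$layout"
```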
<p>This toolkit sits in a sweet spot: simpler than tmuxinator (no YAML to write), more flexible than tmux-resurrect (multiple named layouts), and zero dependencies.</p><hr><h2 id="limitations">Limitations</h2><p>To keep things simple, these scripts don&apos;t capture:</p><ul><li><strong>Running processes</strong> &#x2014; The command is noted but not restarted</li><li><strong>Shell history</strong> &#x2014; Each pane starts fresh</li><li><strong>Scroll buffer</strong> &#x2014; Terminal output isn&apos;t preserved</li><li><strong>Environment variables</strong> &#x2014; Pane-specific env vars aren&apos;t saved</li></ul><p>If you need those features, look into tmux-resurrect with tmux-continuum. But for most workflows, saving the structure and directories is 90% of the value.</p><hr><h2 id="the-code">The Code</h2><p>The entire toolkit is about 200 lines of bash. No magic, no complexity. The format is simple enough that you could write your own layout files by hand if you wanted to.</p><p><strong>Layout file format:</strong></p><pre><code># tmux layout: &lt;name&gt;
# saved: &lt;date&gt;
# session: &lt;original-session-name&gt;

WINDOW|&lt;index&gt;|&lt;name&gt;|&lt;layout-string&gt;
PANE|&lt;window-index&gt;|&lt;pane-index&gt;|&lt;path&gt;|&lt;command&gt;
</code></pre><p>Feel free to version control your layouts, share them, or build tooling on top.</p><hr><h2 id="conclusion">Conclusion</h2><p>Terminal workflows shouldn&apos;t be ephemeral. With <code>tmux-save</code>, <code>tmux-load</code>, and <code>tmux-list</code>, you can:</p><ul><li>Capture your perfect workspace in seconds</li><li>Restore it anywhere, anytime</li><li>Manage multiple configurations effortlessly</li></ul><p>No plugins. No dependencies. Just three scripts and you&apos;re done.</p><p>Stop rebuilding your tmux setup. Start saving it.</p><hr><h2 id="quick-reference">Quick Reference</h2><pre><code class="language-bash"># Save current session
tmux-save &lt;name&gt;

# Load a saved layout (interactive)
tmux-load

# Load a specific layout
tmux-load &lt;name&gt;

# List all layouts
tmux-list

# Delete a layout
tmux-list -d
</code></pre><p>Layouts are stored in: <code>~/.tmux-layouts/</code></p>]]></content:encoded></item><item><title><![CDATA[Becoming Japanese for a Day: My Tokyo Kimono Experience]]></title><description><![CDATA[<hr><h2 id="the-threshold">The Threshold</h2><p>There&apos;s a red gate in Tokyo that changed how I see clothes.</p><p>Not a metaphorical gate. An actual one&#x2014;a <em>torii</em>, vermillion and weathered, standing in a bamboo garden where I stood wrapped in fabric that took 400 years to perfect.</p><p>The Japanese call the</p>]]></description><link>https://www.codeandcompass.net/becoming-japanese-for-a-day-my-tokyo-kimono-experience/</link><guid isPermaLink="false">69352fa9e297b50001f7178d</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Sun, 07 Dec 2025 07:47:53 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2026/01/IMG_4978.jpg" medium="image"/><content:encoded><![CDATA[<hr><h2 id="the-threshold">The Threshold</h2><img src="https://www.codeandcompass.net/content/images/2026/01/IMG_4978.jpg" alt="Becoming Japanese for a Day: My Tokyo Kimono Experience"><p>There&apos;s a red gate in Tokyo that changed how I see clothes.</p><p>Not a metaphorical gate. An actual one&#x2014;a <em>torii</em>, vermillion and weathered, standing in a bamboo garden where I stood wrapped in fabric that took 400 years to perfect.</p><p>The Japanese call the color of these gates <em>shu</em>. The rest of us call it red. But <em>shu</em> isn&apos;t just a color. It&apos;s protection. It&apos;s vitality. 
It&apos;s the boundary between the world you came from and something else entirely.</p><p>I walked through one wearing a stranger&apos;s history on my shoulders.</p><hr><h2 id="what-youre-actually-wearing">What You&apos;re Actually Wearing</h2><p>Here&apos;s what nobody tells you about putting on a kimono:</p><p><strong>It takes 12 separate pieces.</strong></p><p>There&apos;s the <em>hadajuban</em> (an undergarment to absorb sweat), the <em>nagajuban</em> (a longer under-robe), the kimono itself, the <em>obi</em> (that impossibly wide belt), <em>obi-ita</em> (a stiffening board), <em>koshi-himo</em> (waist cords), <em>date-jime</em> (another belt under the obi), <em>tabi</em> (split-toe socks), and <em>zori</em> (sandals). And that&apos;s the simplified version.</p><p>The staff dressed me in layers I couldn&apos;t name while I stood there like a mannequin learning what it means to be wrapped in intention.</p><p>My outfit was a <em>haori</em>&#x2014;a hip-length jacket&#x2014;worn over a patterned kimono. Grey, with wagon-wheel motifs called <em>waguruma</em>. The haori dates back to the Sengoku period (1467-1615), when samurai wore them over armor in winter campaigns. By the Edo period, wealthy merchants began wearing haori with deliberately plain exteriors but lavishly decorated linings&#x2014;a quiet rebellion against laws that restricted their clothing based on social class.</p><p>They could afford silk. They just couldn&apos;t show it.</p><p><strong>Historical fact:</strong> The haori I wore would have been illegal for a common citizen to wear during the Tokugawa shogunate. Only samurai and nobility had the right to this garment. 
Later, geisha in Tokyo&apos;s Fukagawa district broke this tradition around 1800, wearing haori as a fashion statement&#x2014;and started a trend that took 130 years to fully catch on with women.</p><hr><h2 id="the-weight-of-silk">The Weight of Silk</h2><p>The women with me wore something heavier.</p><p>Traditional furisode-style kimonos with patterns that seemed to move even when they stood still&#x2014;yellow flowers against indigo, white magnolias tumbling across navy blue. The <em>obi</em> alone&#x2014;that wide decorative belt&#x2014;can weigh several pounds when made of proper silk brocade. Some formal obi are 4 meters long.</p><p>What strikes you is the posture it creates. You don&apos;t slouch in a kimono. You can&apos;t. The structure holds you upright, transforms your walk into something more deliberate. There&apos;s a reason the Japanese word <em>kikonashi</em>&#x2014;meaning the way one wears clothes&#x2014;is considered an art form.</p><p><strong>The price of tradition:</strong> An authentic silk kimono with custom dyeing (called <em>yuzen</em>, a technique from the Edo period involving hand-painted resist dyeing) can cost anywhere from $10,000 to over $100,000. What we wore were rentals&#x2014;beautiful, but democratized versions of something that was once reserved for weddings, tea ceremonies, and the aristocracy.</p><hr><h2 id="a-garden-that-isnt-a-garden">A Garden That Isn&apos;t a Garden</h2><p>The photo location was a bamboo garden in Tokyo&#x2014;compressed, curated, perfect.</p><p>It had everything: <em>moso</em> bamboo arranged in a half-fence pattern, a sculpted pine (probably decades old), stone paths, and that red torii marking the entrance to a space that doesn&apos;t technically exist.</p><p>Here&apos;s what I mean: Japanese garden design is based on <em>shakkei</em>&#x2014;&quot;borrowed scenery.&quot; The idea that a garden should frame views beyond its boundaries, incorporating distant mountains or forests as part of its composition. 
But in urban Tokyo, there are no distant mountains. So these rental studios create contained worlds&#x2014;gardens that exist entirely for the photograph, for the memory, for the three hours you spend pretending you&apos;re somewhere that time forgot.</p><p>The torii in this garden served no religious function. It was symbol stripped of context, made beautiful, made consumable.</p><p>And I don&apos;t know how to feel about that.</p><hr><h2 id="why-we-do-this">Why We Do This</h2><p>More than 3 million tourists rent kimonos in Kyoto alone each year. Tokyo&apos;s numbers are harder to pin down, but the industry has exploded&#x2014;driven partly by Instagram, partly by a genuine desire to touch something older than ourselves.</p><p>The Japanese have a word: <em>furugi</em>&#x2014;old clothes. But more specifically, they have a phrase: <em>mottainai</em>&#x2014;roughly, &quot;what a waste.&quot; It captures the sadness of throwing away something that still has value. Kimono rental shops are, in a way, fighting <em>mottainai</em>. They&apos;re keeping these garments in circulation, on bodies, in photographs, in some version of living use.</p><p>Is it authentic? Probably not.</p><p>Is it appropriation? The staff seemed genuinely delighted to dress us. They adjusted my collar three times to get it right. They taught me how to hold my hands (together, in front, fingers hidden). They told me I looked <em>kakkoi</em>&#x2014;cool.</p><p>What I think is this: every tradition was once an innovation. The haori was once controversial. The red torii was borrowed from Buddhism, which borrowed it from Indian torana gates. Culture is a river. You can step into it or watch from the shore.</p><hr><h2 id="the-photo">The Photo</h2><p>The three of us stood there&#x2014;me in grey, them in explosions of color&#x2014;inside a room with <em>tatami</em> mats and gold-threaded screens.</p><p>For a moment, we weren&apos;t tourists. We weren&apos;t playing dress-up. 
We were people standing in clothes that other people made with their hands, using techniques passed down through generations, wearing patterns that meant something to someone, somewhere, sometime.</p><p>Then we took our phones out and snapped the shot.</p><hr><h2 id="what-stays">What Stays</h2><p>I kept the <em>tabi</em> socks. Split-toed, white cotton, designed for a sandal I&apos;ll never wear again.</p><p>They sit in my drawer now, next to regular socks. Every few months I see them and remember:</p><p>The weight of the obi.<br>The sound of <em>zori</em> on stone.<br>The way the bamboo smelled.<br>The red gate marking passage into somewhere I couldn&apos;t stay.</p><p><em>Torii</em> translates literally as &quot;bird perch.&quot; According to Japanese mythology, the first torii was built to lure the sun goddess Amaterasu out of a cave where she&apos;d been hiding&#x2014;they placed roosters on a wooden perch, hoping their crowing would draw her curiosity.</p><p>It worked. She came out. The world had light again.</p><p>Some gates are meant to be walked through. 
Others are meant to remind you that the crossing was possible.</p><hr><p><em>Tokyo, September 2024</em></p><hr><h3 id="quick-facts">Quick Facts</h3><ul><li><strong>Kimono rental cost:</strong> &#xA5;3,000-10,000 (~$20-70 USD) for basic packages; premium experiences with photography can exceed &#xA5;50,000</li><li><strong>Dressing time:</strong> 20-45 minutes depending on complexity</li><li><strong>Haori history:</strong> Originally samurai and nobility-only garment from 1500s; women began wearing them around 1800</li><li><strong>Torii gates in Japan:</strong> Approximately 90,000+ across Shinto shrines nationwide</li><li><strong>Torii color meaning:</strong> Red (vermillion) represents vitality, protection from evil spirits, and the sun&apos;s life-giving energy</li></ul><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.codeandcompass.net/content/images/2025/12/IMG_5041_rotated.jpg" width="2000" height="2667" loading="lazy" alt="Becoming Japanese for a Day: My Tokyo Kimono Experience" srcset="https://www.codeandcompass.net/content/images/size/w600/2025/12/IMG_5041_rotated.jpg 600w, https://www.codeandcompass.net/content/images/size/w1000/2025/12/IMG_5041_rotated.jpg 1000w, https://www.codeandcompass.net/content/images/size/w1600/2025/12/IMG_5041_rotated.jpg 1600w, https://www.codeandcompass.net/content/images/size/w2400/2025/12/IMG_5041_rotated.jpg 2400w" sizes="(min-width: 1200px) 1200px"></div></div></div></figure>]]></content:encoded></item><item><title><![CDATA[FastAPI Blueprint 03: Core FastAPI Structure]]></title><description><![CDATA[Learn how to structure FastAPI apps for flexibility so you can plug in new features without breaking the system.]]></description><link>https://www.codeandcompass.net/fastapi-blueprint-03-core-fastapi-structure/</link><guid 
isPermaLink="false">689ace43b601070001f492d4</guid><category><![CDATA[FastAPIBlueprint]]></category><category><![CDATA[FastAPI]]></category><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Tue, 12 Aug 2025 05:32:24 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-11--2025--10_58_30-PM.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="what-you%E2%80%99ll-learn-in-this-chapter"><strong>What You&#x2019;ll Learn in This Chapter</strong></h2><img src="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-11--2025--10_58_30-PM.png" alt="FastAPI Blueprint 03: Core FastAPI Structure"><p>By now, you have:</p><ul><li>A working FastAPI app scaffold from Lesson 1</li><li>A clean environment and Docker setup from Lesson 2</li></ul><p>In this chapter, we&#x2019;ll:</p><ul><li>Structure the app for modularity</li><li>Organize routes into versioned modules</li><li>Implement shared dependencies</li><li>Prepare the app for future features (auth, databases, Kafka) without rewriting core logic</li></ul><p>By the end, you&#x2019;ll have a <strong>production-ready FastAPI skeleton</strong> that&#x2019;s easy to extend for any microservice.</p><hr><h2 id="why-structure-matters-in-fastapi"><strong>Why Structure Matters in FastAPI</strong></h2><p>FastAPI is flexible, but if you throw all routes into <code>main.py</code>, your project will:</p><ul><li>Get messy quickly</li><li>Make it harder to reuse code</li><li>Slow down onboarding for new devs</li></ul><p>We&#x2019;ll fix that by:</p><ul><li>Grouping routes into <strong>versioned modules</strong></li><li>Using <strong>dependency injection</strong> for reusability</li><li>Keeping <code>main.py</code> minimal (just an app factory)</li></ul><hr><h2 id="folder-structure-update"><strong>Folder Structure Update</strong></h2><p>We&#x2019;ll refine from Lesson 2 to introduce domain-driven structure:</p><pre><code>app/
&#x251C;&#x2500; api/
&#x2502;  &#x251C;&#x2500; deps.py                 # Shared dependencies
&#x2502;  &#x251C;&#x2500; v1/
&#x2502;  &#x2502;  &#x251C;&#x2500; __init__.py
&#x2502;  &#x2502;  &#x251C;&#x2500; routes_health.py     # System health/version
&#x2502;  &#x2502;  &#x251C;&#x2500; routes_users.py      # Example resource
&#x2502;  &#x2502;  &#x2514;&#x2500; routes_items.py      # Example resource
&#x251C;&#x2500; core/
&#x2502;  &#x251C;&#x2500; config.py
&#x2502;  &#x251C;&#x2500; logging.py
&#x2502;  &#x251C;&#x2500; version.py
&#x251C;&#x2500; domain/                    # Business logic
&#x2502;  &#x251C;&#x2500; users.py
&#x2502;  &#x2514;&#x2500; items.py
&#x251C;&#x2500; ports/
&#x251C;&#x2500; adapters/
&#x2514;&#x2500; main.py
</code></pre><hr><h2 id="step-by-step-build"><strong>Step-by-Step Build</strong></h2><h3 id="1-shared-dependencies"><strong>1. Shared Dependencies</strong></h3><p>Dependencies help avoid repetitive code like DB connections or auth checks.</p><p><strong>app/api/deps.py</strong></p><pre><code class="language-python">from fastapi import Depends

# Example: current user dependency placeholder
def get_current_user():
    return {&quot;username&quot;: &quot;demo_user&quot;}
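
# A more production-shaped dependency might validate a request header instead.
# Commented sketch only -- the names below are illustrative and are not used
# by later chapters:
#
# from fastapi import Header, HTTPException
#
# def require_api_key(x_api_key: str = Header(default=&quot;&quot;)) -&gt; str:
#     if not x_api_key:
#         raise HTTPException(status_code=401, detail=&quot;Missing X-API-Key header&quot;)
#     return x_api_key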
</code></pre><hr><h3 id="2-routes-by-feature"><strong>2. Routes by Feature</strong></h3><p>We&#x2019;ll create an example resource to show modularity.</p><p><strong>app/api/v1/routes_users.py</strong></p><pre><code class="language-python">from fastapi import APIRouter, Depends
from app.api.deps import get_current_user

router = APIRouter()

@router.get(&quot;/users/me&quot;, tags=[&quot;users&quot;])
async def read_current_user(current_user: dict = Depends(get_current_user)):
    return current_user
</code></pre><p><strong>app/api/v1/routes_items.py</strong></p><pre><code class="language-python">from fastapi import APIRouter

router = APIRouter()

@router.get(&quot;/items&quot;, tags=[&quot;items&quot;])
async def list_items():
    return [{&quot;id&quot;: 1, &quot;name&quot;: &quot;Example Item&quot;}]
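
# A path-parameter route follows the same pattern. Illustrative sketch only,
# not part of this chapter's checklist:
#
# @router.get(&quot;/items/{item_id}&quot;, tags=[&quot;items&quot;])
# async def get_item(item_id: int):
#     return {&quot;id&quot;: item_id, &quot;name&quot;: &quot;Example Item&quot;}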
</code></pre><hr><h3 id="3-api-router-assembly"><strong>3. API Router Assembly</strong></h3><p>We&#x2019;ll gather all v1 routes in a single file for cleaner <code>main.py</code>.</p><p><strong>app/api/v1/__init__.py</strong></p><pre><code class="language-python">from fastapi import APIRouter
from .routes_health import router as health_router
from .routes_users import router as users_router
from .routes_items import router as items_router

api_router = APIRouter()
api_router.include_router(health_router)
api_router.include_router(users_router)
api_router.include_router(items_router)
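
# include_router also accepts per-module prefixes and tags when you need
# more control, e.g. (illustrative):
# api_router.include_router(items_router, prefix=&quot;/inventory&quot;, tags=[&quot;items&quot;])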
</code></pre><hr><h3 id="4-clean-mainpy"><strong>4. Clean <code>main.py</code></strong></h3><p>Your <code>main.py</code> now just wires settings, logging, and routes.</p><p><strong>app/main.py</strong></p><pre><code class="language-python">from fastapi import FastAPI
from app.core.config import settings
from app.core.logging import setup_logging
from app.api.v1 import api_router

def create_app() -&gt; FastAPI:
    setup_logging(settings.log_level)

    app = FastAPI(
        title=settings.app_name,
        version=&quot;0.1.0&quot;,
        docs_url=&quot;/docs&quot;,
        redoc_url=&quot;/redoc&quot;,
    )

    app.include_router(api_router, prefix=settings.api_v1_str)

    return app

app = create_app()
</code></pre><hr><h3 id="5-test-it"><strong>5. Test It</strong></h3><p>Run locally:</p><pre><code class="language-bash">poetry run uvicorn app.main:app --reload
</code></pre><p>Test endpoints:</p><pre><code class="language-bash">curl http://127.0.0.1:8000/api/v1/health
curl http://127.0.0.1:8000/api/v1/users/me
curl http://127.0.0.1:8000/api/v1/items
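
# Expected responses, given the demo handlers above:
#   /users/me -&gt; {&quot;username&quot;:&quot;demo_user&quot;}
#   /items    -&gt; [{&quot;id&quot;:1,&quot;name&quot;:&quot;Example Item&quot;}]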
</code></pre><hr><h2 id="checklist-for-chapter-3-completion"><strong>Checklist for Chapter 3 Completion</strong></h2><ul><li><code>api/v1</code> folder contains separate route files for each feature</li><li><code>deps.py</code> created for shared dependencies</li><li><code>api_router</code> collects all routes into one import for <code>main.py</code></li><li><code>main.py</code> contains no business logic &#x2014; just app wiring</li><li>App runs and all endpoints respond correctly</li></ul><hr><h2 id="next-up-%E2%80%93-lesson-4"><strong>Next Up &#x2013; Lesson 4</strong></h2><p>In <strong>FastAPI Blueprint 04: Database Layer Abstraction</strong>, we&#x2019;ll:</p><ul><li>Implement a generic database port interface</li><li>Create adapters for PostgreSQL, MongoDB, and Redis</li><li>Show how to switch DBs via config, even run multiple in one service</li></ul>]]></content:encoded></item><item><title><![CDATA[FastAPI Blueprint 01: Intro and Architecture Overview]]></title><description><![CDATA[Kick off the series by understanding what we’re building, why microservices matter, and how each component fits into the bigger picture.
]]></description><link>https://www.codeandcompass.net/intro-architecture-overview/</link><guid isPermaLink="false">6896a3c843587100019cf2d8</guid><category><![CDATA[FastAPIBlueprint]]></category><category><![CDATA[Concepts]]></category><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Tue, 12 Aug 2025 05:16:19 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-8--2025--06_42_20-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-8--2025--06_42_20-PM.png" alt="FastAPI Blueprint 01: Intro and Architecture Overview"><p></p><h2 id="what-you-will-learn">What you will learn</h2><ul><li>What this series builds and why</li><li>How the template is structured so you can reuse it for auth, sales, reports, or social feeds</li><li>The roles of PostgreSQL, MongoDB, Redis, and Kafka in this architecture</li><li>A ready starter project with FastAPI, typed settings, basic logging, versioned routing, and a health endpoint</li></ul><h2 id="why-microservices-for-this-template">Why microservices for this template</h2><p>We want a template that can be reused for different domains without copy paste chaos. A microservice works well when you keep boundaries clear and communication explicit. You get independent deploys, tech freedom per service, and the ability to scale hot paths only.</p><p>Tradeoffs to accept:</p><ul><li>More moving parts</li><li>You need good observability and testing</li><li>Data consistency requires care</li></ul><h2 id="high-level-architecture">High level architecture</h2><pre><code>[Client or Another Service]
            |
        HTTP/REST
            v
     +-----------------+
     |  FastAPI App    |
     |  Routers        |
     |  DI glue        |
     +--------+--------+
              | Service layer calls ports
    +---------+-----------------------------+
    |                                       |
[Database Ports]                      [Messaging Port]
  |        |        |                       |
Postgres  MongoDB  Redis                 Kafka Client

</code></pre><ul><li><strong>Routers</strong> expose HTTP endpoints</li><li><strong>Service layer</strong> holds use cases and rules</li><li><strong>Ports</strong> define interfaces for infrastructure</li><li><strong>Adapters</strong> implement those ports for Postgres, MongoDB, Redis, Kafka</li></ul><p>This is the clean architecture vibe. Business code depends on interfaces, not concrete tech. You can swap a datastore or add one without rewriting your core logic.</p><h2 id="when-to-use-each-datastore">When to use each datastore</h2><ul><li><strong>PostgreSQL</strong>: transactional data, strong consistency, relations</li><li><strong>MongoDB</strong>: flexible documents, analytics-style reads, content feeds</li><li><strong>Redis</strong>: caching, sessions, rate limits, short lived data</li><li><strong>Kafka</strong>: events between services, decoupling write and read models</li></ul><p>You can use more than one in the same service. Example: write orders in Postgres, cache summaries in Redis, publish order events to Kafka.</p><h2 id="what-we-build-today">What we build today</h2><ul><li>Project scaffold that will scale for the rest of the series</li><li>App factory with typed settings</li><li>Versioned API prefix</li><li>Health and version endpoints</li><li>Logging that plays nice in containers</li><li>Placeholders for databases and Kafka that we will fill later</li></ul><hr><h1 id="code-for-step-1">Code for Step 1</h1><blockquote>Folder tree</blockquote><pre><code>fastapi-blueprint/
&#x251C;&#x2500; pyproject.toml                 # or requirements.txt if you prefer pip
&#x251C;&#x2500; README.md
&#x251C;&#x2500; .env.example
&#x251C;&#x2500; .gitignore
&#x251C;&#x2500; Dockerfile
&#x251C;&#x2500; app/
&#x2502;  &#x251C;&#x2500; __init__.py
&#x2502;  &#x251C;&#x2500; core/
&#x2502;  &#x2502;  &#x251C;&#x2500; config.py               # Pydantic settings
&#x2502;  &#x2502;  &#x251C;&#x2500; logging.py              # base logging setup
&#x2502;  &#x2502;  &#x2514;&#x2500; version.py              # service version
&#x2502;  &#x251C;&#x2500; main.py                    # app factory
&#x2502;  &#x251C;&#x2500; api/
&#x2502;  &#x2502;  &#x251C;&#x2500; __init__.py
&#x2502;  &#x2502;  &#x251C;&#x2500; deps.py                 # shared dependencies
&#x2502;  &#x2502;  &#x251C;&#x2500; v1/
&#x2502;  &#x2502;  &#x2502;  &#x251C;&#x2500; __init__.py
&#x2502;  &#x2502;  &#x2502;  &#x2514;&#x2500; routes_health.py     # /health, /version
&#x2502;  &#x251C;&#x2500; domain/                    # business rules later
&#x2502;  &#x2502;  &#x2514;&#x2500; __init__.py
&#x2502;  &#x251C;&#x2500; ports/                     # interfaces
&#x2502;  &#x2502;  &#x251C;&#x2500; __init__.py
&#x2502;  &#x2502;  &#x251C;&#x2500; db_port.py
&#x2502;  &#x2502;  &#x2514;&#x2500; messaging_port.py
&#x2502;  &#x2514;&#x2500; adapters/                  # implementations later
&#x2502;     &#x251C;&#x2500; __init__.py
&#x2502;     &#x251C;&#x2500; postgres_adapter.py
&#x2502;     &#x251C;&#x2500; mongodb_adapter.py
&#x2502;     &#x251C;&#x2500; redis_adapter.py
&#x2502;     &#x2514;&#x2500; kafka_adapter.py
&#x2514;&#x2500; tests/
   &#x2514;&#x2500; test_health.py
</code></pre><h2 id="pyprojecttoml-poetry">pyproject.toml (Poetry)</h2><p>If you prefer pip, I include a <code>requirements.txt</code> right after.</p><pre><code class="language-toml">[tool.poetry]
name = &quot;fastapi-blueprint&quot;
version = &quot;0.1.0&quot;
description = &quot;FastAPI microservice template - Step 1 scaffold&quot;
authors = [&quot;You &lt;you@example.com&gt;&quot;]
readme = &quot;README.md&quot;
packages = [{ include = &quot;app&quot; }]

[tool.poetry.dependencies]
python = &quot;^3.12&quot;
fastapi = &quot;^0.115.0&quot;
uvicorn = { extras = [&quot;standard&quot;], version = &quot;^0.30.0&quot; }
pydantic-settings = &quot;^2.4.0&quot;

[tool.poetry.group.dev.dependencies]
pytest = &quot;^8.3.0&quot;
httpx = &quot;^0.27.0&quot;
pytest-asyncio = &quot;^0.23.7&quot;

[build-system]
requires = [&quot;poetry-core&quot;]
build-backend = &quot;poetry.core.masonry.api&quot;
</code></pre><h2 id="requirementstxt-pip-alternative">requirements.txt (pip alternative)</h2><pre><code>fastapi==0.115.0
uvicorn[standard]==0.30.0
pydantic-settings==2.4.0
pytest==8.3.0
httpx==0.27.0
pytest-asyncio==0.23.7
</code></pre><h2 id="envexample">.env.example</h2><pre><code>APP_NAME=fastapi-blueprint
APP_ENV=local
API_V1_STR=/api/v1
LOG_LEVEL=INFO
</code></pre><h2 id="appcoreversionpy">app/core/version.py</h2><pre><code class="language-python">SERVICE_NAME = &quot;fastapi-blueprint&quot;
SERVICE_VERSION = &quot;0.1.0&quot;
</code></pre><h2 id="appcoreconfigpy">app/core/config.py</h2><pre><code class="language-python">from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    app_name: str = &quot;fastapi-blueprint&quot;
    app_env: str = &quot;local&quot;
    api_v1_str: str = &quot;/api/v1&quot;
    log_level: str = &quot;INFO&quot;

    model_config = SettingsConfigDict(env_file=&quot;.env&quot;, env_file_encoding=&quot;utf-8&quot;)

settings = Settings()
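
# Real environment variables take precedence over values read from .env,
# so a setting can be overridden per run without editing any file, e.g.:
#   LOG_LEVEL=DEBUG uvicorn app.main:app --reload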
</code></pre><h2 id="appcoreloggingpy">app/core/logging.py</h2><pre><code class="language-python">import logging
import sys

def setup_logging(level: str = &quot;INFO&quot;) -&gt; None:
    handler = logging.StreamHandler(sys.stdout)
    fmt = &quot;%(asctime)s | %(levelname)s | %(name)s | %(message)s&quot;
    handler.setFormatter(logging.Formatter(fmt))
    root = logging.getLogger()
    root.handlers.clear()
    root.addHandler(handler)
    root.setLevel(level.upper())
</code></pre><h2 id="appapiv1routeshealthpy">app/api/v1/routes_health.py</h2><pre><code class="language-python">from fastapi import APIRouter
from app.core.version import SERVICE_NAME, SERVICE_VERSION

router = APIRouter()

@router.get(&quot;/health&quot;, tags=[&quot;system&quot;])
async def health() -&gt; dict:
    return {&quot;status&quot;: &quot;ok&quot;}

@router.get(&quot;/version&quot;, tags=[&quot;system&quot;])
async def version() -&gt; dict:
    return {&quot;service&quot;: SERVICE_NAME, &quot;version&quot;: SERVICE_VERSION}
</code></pre><h2 id="appapidepspy">app/api/deps.py</h2><pre><code class="language-python"># Shared dependencies live here. Example:
# from fastapi import Depends
# def get_current_user(...) -&gt; User: ...
</code></pre><h2 id="appportsdbportpy">app/ports/db_port.py</h2><pre><code class="language-python">from typing import Protocol, Any

class DatabasePort(Protocol):
    async def connect(self) -&gt; None: ...
    async def close(self) -&gt; None: ...
    # You can add generic methods or keep it minimal for now
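    # For example (illustrative signatures only; the real methods are
    # pinned down in the adapter lessons):
    # async def fetch_one(self, query: str, *params: Any) -&gt; Any: ...
    # async def execute(self, query: str, *params: Any) -&gt; None: ...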
</code></pre><h2 id="appportsmessagingportpy">app/ports/messaging_port.py</h2><pre><code class="language-python">from typing import Protocol, Mapping, Any

class MessagingPort(Protocol):
    async def publish(self, topic: str, key: bytes | None, value: bytes, headers: Mapping[str, bytes] | None = None) -&gt; None: ...
    async def start_consumer(self) -&gt; None: ...
    async def stop_consumer(self) -&gt; None: ...
</code></pre><h2 id="appadapters-placeholders">app/adapters placeholders</h2><pre><code class="language-python"># app/adapters/postgres_adapter.py
# Implement DatabasePort for Postgres in a later lesson
</code></pre><pre><code class="language-python"># app/adapters/mongodb_adapter.py
# Implement DatabasePort for MongoDB in a later lesson
</code></pre><pre><code class="language-python"># app/adapters/redis_adapter.py
# Implement DatabasePort for Redis in a later lesson
</code></pre><pre><code class="language-python"># app/adapters/kafka_adapter.py
# Implement MessagingPort for Kafka in a later lesson
</code></pre><h2 id="appmainpy">app/main.py</h2><pre><code class="language-python">from fastapi import FastAPI
from app.core.config import settings
from app.core.logging import setup_logging
from app.api.v1.routes_health import router as health_router

def create_app() -&gt; FastAPI:
    setup_logging(settings.log_level)

    app = FastAPI(
        title=settings.app_name,
        version=&quot;0.1.0&quot;,
        docs_url=&quot;/docs&quot;,
        redoc_url=&quot;/redoc&quot;,
    )

    # Versioned API
    app.include_router(health_router, prefix=settings.api_v1_str)

    @app.get(&quot;/&quot;, tags=[&quot;system&quot;])
    async def root():
        return {&quot;message&quot;: &quot;Service running&quot;, &quot;name&quot;: settings.app_name}

    return app

app = create_app()
</code></pre><h2 id="teststesthealthpy">tests/test_health.py</h2><pre><code class="language-python">import pytest
from httpx import AsyncClient
from app.main import create_app

@pytest.mark.asyncio
async def test_health():
    app = create_app()
    async with AsyncClient(app=app, base_url=&quot;http://test&quot;) as ac:
        resp = await ac.get(&quot;/api/v1/health&quot;)
    assert resp.status_code == 200
    assert resp.json()[&quot;status&quot;] == &quot;ok&quot;
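
# Note: httpx 0.28 removed the `app=` shortcut used above (on the 0.27.x
# pin in this project it still works, with a deprecation warning). On newer
# httpx, pass an explicit ASGI transport instead:
#   from httpx import ASGITransport
#   AsyncClient(transport=ASGITransport(app=app), base_url=&quot;http://test&quot;)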
</code></pre><h2 id="dockerfile">Dockerfile</h2><pre><code class="language-dockerfile">FROM python:3.12-slim

WORKDIR /app

# If you use Poetry
RUN pip install --no-cache-dir poetry==1.8.3
COPY pyproject.toml poetry.lock* /app/
RUN poetry config virtualenvs.create false \
  &amp;&amp; poetry install --no-root --no-interaction --no-ansi

# If you use pip instead, comment out the Poetry block above and:
# COPY requirements.txt /app/
# RUN pip install --no-cache-dir -r requirements.txt

COPY . /app

ENV PYTHONUNBUFFERED=1
ENV PORT=8080

CMD [&quot;uvicorn&quot;, &quot;app.main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8080&quot;]
</code></pre><h2 id="quick-start">Quick start</h2><p>Using Poetry:</p><pre><code class="language-bash">poetry install
cp .env.example .env
poetry run uvicorn app.main:app --reload
</code></pre><p>Using pip:</p><pre><code class="language-bash">python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
uvicorn app.main:app --reload
</code></pre><p>Test it:</p><pre><code class="language-bash">curl http://127.0.0.1:8000/api/v1/health
curl http://127.0.0.1:8000/api/v1/version
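
# Expected, with the defaults above:
#   /health  -&gt; {&quot;status&quot;:&quot;ok&quot;}
#   /version -&gt; {&quot;service&quot;:&quot;fastapi-blueprint&quot;,&quot;version&quot;:&quot;0.1.0&quot;}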
</code></pre><p>You now have a clean skeleton that we can extend in the next lessons. Step 2 will set up the full project environment, refine config handling, and lock in the folder conventions so adding databases and Kafka later feels natural.</p>]]></content:encoded></item><item><title><![CDATA[FastAPI Blueprint 02: Project Setup &amp; Environment]]></title><description><![CDATA[Set up a clean, scalable FastAPI project structure that&#x2019;s ready to handle anything from authentication to analytics.
]]></description><link>https://www.codeandcompass.net/fastapi-blueprint-02-project-setup-environment/</link><guid isPermaLink="false">6896ab1043587100019cf2ee</guid><category><![CDATA[FastAPIBlueprint]]></category><category><![CDATA[Setup]]></category><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Tue, 12 Aug 2025 04:41:24 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-8--2025--07_02_47-PM.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="what-you%E2%80%99ll-learn-in-this-chapter"><strong>What You&#x2019;ll Learn in This Chapter</strong></h2><img src="https://www.codeandcompass.net/content/images/2025/08/ChatGPT-Image-Aug-8--2025--07_02_47-PM.png" alt="FastAPI Blueprint 02: Project Setup &amp; Environment"><p>In the first chapter, we mapped out the architecture for our FastAPI microservice template.<br>Now it&#x2019;s time to lay the foundation &#x2014; the project&#x2019;s environment and folder structure &#x2014; so everything is clean, consistent, and easy to extend.</p><p>By the end of this chapter, you&#x2019;ll have:</p><ul><li>A consistent folder structure for your service</li><li>Poetry or pip configured with dependencies</li><li>Environment variable management with <code>.env</code> and Pydantic Settings</li><li>A running FastAPI app with a <code>/health</code> endpoint from Step 1</li><li>Ready-to-use Docker setup for containerized development</li></ul><hr><h2 id="why-environment-setup-matters"><strong>Why Environment Setup Matters</strong></h2><p>A microservice template isn&#x2019;t just about code &#x2014; it&#x2019;s about <strong>maintainability</strong>.<br>A good setup should let you:</p><ul><li>Onboard a new developer in minutes</li><li>Swap databases or message brokers without changing core logic</li><li>Keep configs out of your codebase and in <code>.env</code> files or secret managers</li><li>Run locally with minimal fuss, whether via <code>uvicorn</code> or 
Docker</li></ul><p>Skipping this step means your &#x201C;template&#x201D; quickly turns into spaghetti that&#x2019;s hard to reuse.</p><hr><h2 id="folder-structure-we%E2%80%99ll-use"><strong>Folder Structure We&#x2019;ll Use</strong></h2><p>We&#x2019;ll extend the structure from Step 1:</p><pre><code>fastapi-blueprint/
&#x251C;&#x2500; pyproject.toml / requirements.txt
&#x251C;&#x2500; .env.example
&#x251C;&#x2500; Dockerfile
&#x251C;&#x2500; docker-compose.yml   &lt;-- added here for local DB/message broker runs
&#x251C;&#x2500; app/
&#x2502;  &#x251C;&#x2500; core/             &lt;-- config, logging, constants
&#x2502;  &#x251C;&#x2500; api/              &lt;-- versioned routes
&#x2502;  &#x251C;&#x2500; domain/           &lt;-- business logic
&#x2502;  &#x251C;&#x2500; ports/            &lt;-- interfaces
&#x2502;  &#x251C;&#x2500; adapters/         &lt;-- implementations
&#x2502;  &#x2514;&#x2500; main.py
&#x2514;&#x2500; tests/
</code></pre><hr><h2 id="step-by-step-setup"><strong>Step-by-Step Setup</strong></h2><h3 id="1-choose-dependency-manager"><strong>1. Choose Dependency Manager</strong></h3><p>We&#x2019;ll use <strong>Poetry</strong> in this series, but I&#x2019;ll include pip equivalents.<br>Poetry gives you lock files and a cleaner dependency setup.</p><p><strong>Poetry:</strong></p><pre><code class="language-bash">pip install poetry
poetry init
poetry add fastapi &quot;uvicorn[standard]&quot; pydantic-settings
poetry add --group dev pytest httpx pytest-asyncio
</code></pre><p><strong>Pip:</strong></p><pre><code class="language-bash">python -m venv .venv
source .venv/bin/activate
pip install fastapi &quot;uvicorn[standard]&quot; pydantic-settings pytest httpx pytest-asyncio
pip freeze &gt; requirements.txt
</code></pre><hr><h3 id="2-create-env-and-envexample"><strong>2. Create <code>.env</code> and <code>.env.example</code></strong></h3><p>We&#x2019;ll define default configs here:</p><p><code>.env.example</code></p><pre><code class="language-env">APP_NAME=fastapi-blueprint
APP_ENV=local
API_V1_STR=/api/v1
LOG_LEVEL=INFO
</code></pre><p>Copy it to <code>.env</code> for local runs:</p><pre><code class="language-bash">cp .env.example .env
</code></pre><hr><h3 id="3-configure-pydantic-settings"><strong>3. Configure Pydantic Settings</strong></h3><p><strong>app/core/config.py</strong></p><pre><code class="language-python">from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    app_name: str
    app_env: str
    api_v1_str: str
    log_level: str = &quot;INFO&quot;

    model_config = SettingsConfigDict(env_file=&quot;.env&quot;, env_file_encoding=&quot;utf-8&quot;)

settings = Settings()
</code></pre><hr><h3 id="4-update-main-app-factory"><strong>4. Update Main App Factory</strong></h3><p>We&#x2019;ll extend Step 1&#x2019;s <code>create_app()</code> to load settings automatically:</p><p><strong>app/main.py</strong></p><pre><code class="language-python">from fastapi import FastAPI
from app.core.config import settings
from app.core.logging import setup_logging
from app.api.v1.routes_health import router as health_router

def create_app() -&gt; FastAPI:
    setup_logging(settings.log_level)

    app = FastAPI(
        title=settings.app_name,
        version=&quot;0.1.0&quot;,
        docs_url=&quot;/docs&quot;,
        redoc_url=&quot;/redoc&quot;,
    )

    app.include_router(health_router, prefix=settings.api_v1_str)

    return app

app = create_app()
</code></pre><hr><h3 id="5-add-docker-support"><strong>5. Add Docker Support</strong></h3><p><strong>Dockerfile</strong> (from Step 1) stays mostly the same, but now we add Compose.</p><p><strong>docker-compose.yml</strong></p><pre><code class="language-yaml">version: &apos;3.9&apos;

services:
  app:
    build: .
    container_name: fastapi_blueprint
    ports:
      - &quot;8000:8000&quot;
    env_file: .env
    volumes:
      - .:/app
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
</code></pre><hr><h3 id="6-test-the-setup"><strong>6. Test the Setup</strong></h3><p>Run locally:</p><pre><code class="language-bash">poetry run uvicorn app.main:app --reload
</code></pre><p>or via Docker:</p><pre><code class="language-bash">docker compose up --build
</code></pre><p>Check:</p><pre><code class="language-bash">curl http://127.0.0.1:8000/api/v1/health
</code></pre><hr><h2 id="what%E2%80%99s-next-step-3-preview"><strong>What&#x2019;s Next (Step 3 Preview)</strong></h2><p>In <strong>FastAPI Blueprint 03: Core FastAPI Structure</strong>, we&#x2019;ll:</p><ul><li>Break routes into modules</li><li>Introduce dependency injection for shared logic</li><li>Set up API versioning conventions for long-term growth</li></ul><hr><p>&#x2705; <strong>Checklist for Chapter 2 Completion</strong></p><ul><li>Poetry or pip installed and configured</li><li><code>.env</code> and <code>.env.example</code> created</li><li><code>Settings</code> class reads environment variables</li><li>App loads configs and logging from <code>.env</code></li><li>Dockerfile + docker-compose.yml tested locally</li><li><code>/health</code> endpoint reachable at <code>http://localhost:8000/api/v1/health</code></li></ul>]]></content:encoded></item><item><title><![CDATA[One-Key-per-Repo: A Simple Workflow for Managing Multiple Git Accounts]]></title><description><![CDATA[<p></p><p>Ever juggled code across personal, work, and side-project accounts only to push with the wrong credentials? 
In this post you&#x2019;ll build a <strong>two-script toolkit</strong> that solves the problem:</p><ol><li><code><strong>generate_ssh_key.sh</strong></code> &#x2013; creates a fresh SSH key and drops a ready-to-paste stanza into <code>~/.ssh/config</code>.</li><li><code><strong>checkout</strong></code> &#x2013;</li></ol>]]></description><link>https://www.codeandcompass.net/how-to-automate-the-creation-of-ssh-files-and-checkout-out-from-multiple-account-and-repositories/</link><guid isPermaLink="false">689642fcf2d3000001061eff</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Fri, 08 Aug 2025 18:42:35 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_lnejnnlnejnnlnej.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_lnejnnlnejnnlnej.jpeg" alt="One-Key-per-Repo: A Simple Workflow for Managing Multiple Git Accounts"><p></p><p>Ever juggled code across personal, work, and side-project accounts only to push with the wrong credentials? 
In this post you&#x2019;ll build a <strong>two-script toolkit</strong> that solves the problem:</p><ol><li><code><strong>generate_ssh_key.sh</strong></code> &#x2013; creates a fresh SSH key and drops a ready-to-paste stanza into <code>~/.ssh/config</code>.</li><li><code><strong>checkout</strong></code> &#x2013; rewrites any Git URL so <code>git clone</code> automatically uses the right key for that host.</li></ol><p>Along the way you&#x2019;ll:</p><ul><li>Organize your scripts in a dedicated directory</li><li>Add that directory to <code>PATH</code></li><li>Test the flow end-to-end</li></ul><hr><h2 id="why-bother-with-per-repo-keys">Why bother with per-repo keys?</h2><ul><li><strong>Security</strong> &#x2013; revoke a single key without touching other repos.</li><li><strong>Separation of concerns</strong> &#x2013; clean audit trails for each Git hosting account.</li><li><strong>Convenience</strong> &#x2013; eliminate <code>GIT_SSH_COMMAND</code> hacks or accidental pushes with the wrong identity.</li></ul><hr><h2 id="project-layout">Project layout</h2><pre><code>~/Development/personal/innovate-hub/tools/scripts/
&#x2502;
&#x251C;&#x2500;&#x2500; generate_ssh_key.sh  # script 1
&#x2514;&#x2500;&#x2500; checkout             # script 2  (no .sh for nicer UX)
</code></pre><p>Make sure the directory exists:</p><pre><code class="language-bash">mkdir -p ~/Development/personal/innovate-hub/tools/scripts
</code></pre><hr><h2 id="script-1-%E2%80%93-generatesshkeysh">Script 1 &#x2013; generate_ssh_key.sh</h2><pre><code class="language-bash">#!/usr/bin/env bash
# Usage: generate_ssh_key.sh &lt;key-name&gt;

set -euo pipefail

if [[ -z &quot;${1-}&quot; ]]; then
  echo &quot;Usage: $0 &lt;key_name&gt;&quot;; exit 1
fi

KEY_NAME=&quot;$1&quot;
SSH_DIR=&quot;$HOME/.ssh&quot;
KEY_PATH=&quot;$SSH_DIR/id_rsa_${KEY_NAME}&quot;

if [[ -f &quot;$KEY_PATH&quot; ]]; then
  echo &quot;Key ${KEY_NAME} already exists at ${KEY_PATH}&quot;; exit 1
fi

mkdir -p &quot;$SSH_DIR&quot; &amp;&amp; chmod 700 &quot;$SSH_DIR&quot;

ssh-keygen -t rsa -b 4096 -f &quot;${KEY_PATH}&quot; -N &quot;&quot; -C &quot;${KEY_NAME}@$(hostname)&quot;

cat &lt;&lt;EOF

# &#x2500;&#x2500;&#x2500; Add this to ~/.ssh/config &#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;
Host ${KEY_NAME}
  # use gitlab.com, bitbucket.org, etc. here if needed
  HostName github.com
  User git
  IdentityFile ${KEY_PATH}
  IdentitiesOnly yes
# &#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;

Public key (paste into your Git provider):
$(cat &quot;${KEY_PATH}.pub&quot;)
EOF
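</code></pre><p>For reference, after running the script for two hypothetical accounts (say <code>github-personal</code> and <code>gitlab-work</code>) and adding each printed block, your <code>~/.ssh/config</code> would contain stanzas like:</p><pre><code>Host github-personal
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_rsa_github-personal
  IdentitiesOnly yes

Host gitlab-work
  HostName gitlab.com
  User git
  IdentityFile ~/.ssh/id_rsa_gitlab-work
  IdentitiesOnly yes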
</code></pre><p><strong>What it does</strong></p><ul><li>Bails if no key name is provided.</li><li>Generates <code>id_rsa_&lt;key&gt;</code> with a 4096-bit key size.</li><li>Prints a ready-made <code>Host</code> block so you never mistype paths in <code>ssh_config</code>.</li></ul><hr><h2 id="script-2-%E2%80%93-checkout">Script 2 &#x2013; checkout</h2><pre><code class="language-bash">#!/usr/bin/env bash
# Usage: checkout &lt;git-url&gt;
# Example: checkout https://github.com/kiarash/my-repo.git
#          checkout git@gitlab.com:kiarash/secret.git

set -euo pipefail

URL=&quot;${1-}&quot;
[[ -z &quot;$URL&quot; ]] &amp;&amp; { echo &quot;Usage: $0 &lt;git-url&gt;&quot;; exit 1; }

CONFIG=&quot;$HOME/.ssh/config&quot;

# Extract hostname and repo path
if [[ &quot;$URL&quot; =~ ^git@([^:]+):(.+)$ ]]; then
  HOST=&quot;${BASH_REMATCH[1]}&quot;
  PATH_PART=&quot;${BASH_REMATCH[2]}&quot;
elif [[ &quot;$URL&quot; =~ ^https?://([^/]+)/(.+)$ ]]; then
  HOST=&quot;${BASH_REMATCH[1]}&quot;
  PATH_PART=&quot;${BASH_REMATCH[2]}&quot;
else
  echo &quot;Unrecognised URL: $URL&quot;; exit 1
fi

# Find matching Host alias in ~/.ssh/config
ALIAS=$(awk -v h=&quot;$HOST&quot; &apos;
  $1==&quot;Host&quot;   {host=$2}
  $1==&quot;HostName&quot; &amp;&amp; $2==h {print host; exit}
&apos; &quot;$CONFIG&quot;)

[[ -z &quot;$ALIAS&quot; ]] &amp;&amp; { echo &quot;No Host entry for ${HOST}&quot;; exit 1; }

NEW_URL=&quot;git@${ALIAS}:${PATH_PART}&quot;
echo &quot;Cloning with ${NEW_URL}&quot;
git clone &quot;${NEW_URL}&quot;
</code></pre><p><strong>What it does</strong></p><ol><li>Parses the incoming URL (SSH or HTTPS).</li><li>Looks up a <code>HostName</code> match inside <code>~/.ssh/config</code>.</li><li>Rewrites the URL to <code>git@&lt;Host-alias&gt;:repo/path.git</code>.</li><li>Runs <code>git clone</code> so the correct key is automatically selected.</li></ol><hr><h2 id="make-the-scripts-executable">Make the scripts executable</h2><pre><code class="language-bash">chmod +x ~/Development/personal/innovate-hub/tools/scripts/{generate_ssh_key.sh,checkout}
</code></pre><hr><h2 id="add-the-scripts-directory-to-path">Add the scripts directory to PATH</h2><p>Edit <code>~/.zshrc</code> (or <code>~/.profile</code> if you prefer it shell-agnostic):</p><pre><code class="language-bash">export PATH=&quot;$PATH:$HOME/Development/personal/innovate-hub/tools/scripts&quot;
</code></pre><p>Then reload:</p><pre><code class="language-bash">source ~/.zshrc   # or source ~/.profile
</code></pre><p>Confirm:</p><pre><code class="language-bash">which checkout
# &#x2192; /home/kevin/Development/personal/innovate-hub/tools/scripts/checkout
</code></pre><hr><h2 id="putting-it-all-together">Putting it all together</h2><p><strong>Generate a key</strong> per account or repo:</p><pre><code class="language-bash">generate_ssh_key.sh github-personal
</code></pre><p><em>Paste the public key into GitHub and add the printed block to <code>~/.ssh/config</code>.</em></p><p><strong>Clone using the smart checkout</strong>:</p><pre><code class="language-bash">cd ~/code
checkout https://github.com/kiarash/my-repo.git
</code></pre><p>The script matches <code>github.com</code> to the <code>github-personal</code> alias in your config, rewrites the URL, and clones with the right key.</p><p><strong>Work as usual</strong> &#x2013; all future <code>git pull/push</code> operations in that repo use the linked key.</p><hr><h2 id="troubleshooting">Troubleshooting</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Symptom</th>
<th>Fix</th>
</tr>
</thead>
<tbody>
<tr>
<td><code inline>No Host entry for github.com</code></td>
<td>Double-check the <code inline>HostName</code> field in your <code inline>~/.ssh/config</code>.</td>
</tr>
<tr>
<td>Permission denied (publickey)</td>
<td>Ensure you added the <strong>public</strong> key to your Git provider and that your private key file permissions are <code inline>600</code>.</td>
</tr>
<tr>
<td><code inline>checkout</code> not found</td>
<td>Re-source your shell config or verify the <code inline>PATH</code> export.</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
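<p>If <code>checkout</code> ever rejects a URL, it can help to test the parsing step in isolation. The hypothetical <code>parse_host</code> helper below mirrors the two regexes from the script and prints the host and repo path it extracts:</p><pre><code class="language-bash">parse_host() {
  # Print the host and repo path for SSH- or HTTPS-style Git URLs
  local url="$1"
  if [[ "$url" =~ ^git@([^:]+):(.+)$ ]]; then
    echo "${BASH_REMATCH[1]} ${BASH_REMATCH[2]}"
  elif [[ "$url" =~ ^https?://([^/]+)/(.+)$ ]]; then
    echo "${BASH_REMATCH[1]} ${BASH_REMATCH[2]}"
  else
    return 1
  fi
}

parse_host git@gitlab.com:kiarash/secret.git        # -> gitlab.com kiarash/secret.git
parse_host https://github.com/kiarash/my-repo.git   # -> github.com kiarash/my-repo.git
</code></pre><p>If the helper prints the host you expect but <code>checkout</code> still fails, the problem is in your <code>~/.ssh/config</code> rather than in the URL.</p>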
<hr><p>Happy cloning &#x2013; and no more mismatched keys!</p>]]></content:encoded></item><item><title><![CDATA[How to Add Google Tag Manager to Your Ghost Blog]]></title><description><![CDATA[<p>Are you looking to get more out of your Ghost blog? Implementing Google Tag Manager (GTM) is a fantastic way to track your audience&apos;s behavior, manage third-party scripts, and supercharge your analytics.</p>]]></description><link>https://www.codeandcompass.net/how-to-add-google-tag-manager-to-your-ghost-blog/</link><guid isPermaLink="false">6895afdef2d3000001061eee</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Fri, 08 Aug 2025 08:07:58 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_3bf7sg3bf7sg3bf7.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_3bf7sg3bf7sg3bf7.jpeg" alt="How to Add Google Tag Manager to Your Ghost Blog"><p>Are you looking to get more out of your Ghost blog? Implementing <strong>Google Tag Manager (GTM)</strong> is a fantastic way to track your audience&apos;s behavior, manage third-party scripts, and supercharge your analytics&#x2014;all without constantly editing your theme files.</p><p>The good news is, adding GTM to a Ghost site is a straightforward process, thanks to the platform&apos;s built-in <strong>Code Injection</strong> feature. 
Here&#x2019;s a simple, step-by-step guide to get you up and running.</p><hr><h4 id="step-1-grab-your-gtm-code-snippets">Step 1: Grab Your GTM Code Snippets</h4><p>First, you&apos;ll need to create a Google Tag Manager account and a new container for your blog if you haven&apos;t already. Once that&apos;s done, GTM will provide you with two unique code snippets:</p><ol><li>A <strong><code>&lt;head&gt;</code> snippet</strong>, which should be placed as high as possible in the <code>&lt;head&gt;</code> section of your site.</li><li>A <strong><code>&lt;body&gt;</code> snippet</strong>, which needs to be placed immediately after the opening <code>&lt;body&gt;</code> tag.</li></ol><p>Keep these two snippets handy, as you&apos;ll be using them in the next step.</p><hr><h4 id="step-2-use-ghosts-code-injection">Step 2: Use Ghost&apos;s Code Injection</h4><p>Ghost&#x2019;s admin dashboard has a powerful feature for adding custom code without touching the theme&apos;s core files. This is exactly what we&apos;ll use for GTM.</p><ol><li>Log into your Ghost admin dashboard.</li><li>In the left-hand menu, click on <strong>Settings</strong>.</li><li>Scroll down to the <strong>Advanced</strong> section and select <strong>Code Injection</strong>.</li></ol><p>You&apos;ll see two text areas: <strong>Site Header</strong> and <strong>Site Footer</strong>.</p><ul><li><strong>For the <code>&lt;head&gt;</code> snippet:</strong> Copy the first GTM code snippet and paste it into the <strong>Site Header</strong> field. This ensures the code is loaded on every page, just before the closing <code>&lt;/head&gt;</code> tag.</li><li><strong>For the <code>&lt;body&gt;</code> snippet:</strong> While GTM recommends placing this snippet immediately after the opening <code>&lt;body&gt;</code> tag, Ghost&apos;s Code Injection feature only provides a <strong>Site Footer</strong> field, which places code right before the closing <code>&lt;/body&gt;</code> tag. 
This is a standard and acceptable workaround for Ghost users. Simply paste the second GTM code snippet into the <strong>Site Footer</strong> field.</li></ul><ol start="4"><li>Click the <strong>Save</strong> button in the top-right corner.</li></ol><hr><h4 id="step-3-verify-your-installation-with-tag-assistant">Step 3: Verify Your Installation with Tag Assistant</h4><p>After saving your changes, it&apos;s essential to make sure everything is working correctly.</p><ol><li>Go back to your Google Tag Manager workspace.</li><li>Click the <strong>Preview</strong> button.</li><li>A new window will pop up. Enter the URL of your Ghost blog and click <strong>Connect</strong>.</li></ol><p>This will open your website in a new tab with the <strong>Tag Assistant</strong> debug console at the bottom of the screen. If you see your container ID listed and the status shows &quot;Connected,&quot; you&apos;ve successfully installed Google Tag Manager on your Ghost blog. You can now start adding and managing all your tags, triggers, and variables directly from the GTM interface.</p>]]></content:encoded></item><item><title><![CDATA[🚀 How to Install pyenv on Ubuntu (The Easy Way)]]></title><description><![CDATA[<hr><h2 id="%F0%9F%9A%80-how-to-install-pyenv-on-ubuntu-the-easy-way">&#x1F680; How to Install pyenv on Ubuntu (The Easy Way)</h2><p>If you&#x2019;ve ever juggled multiple Python versions on your machine and felt the pain of dependency conflicts, <code>pyenv</code> is your new best friend. 
It&apos;s a simple yet powerful tool that allows you to install and manage</p>]]></description><link>https://www.codeandcompass.net/how-to-install-pyenv-on-ubuntu-the-easy-way/</link><guid isPermaLink="false">6895aca5f2d3000001061edd</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Fri, 08 Aug 2025 08:00:32 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_6bxwka6bxwka6bxw.jpeg" medium="image"/><content:encoded><![CDATA[<hr><h2 id="%F0%9F%9A%80-how-to-install-pyenv-on-ubuntu-the-easy-way">&#x1F680; How to Install pyenv on Ubuntu (The Easy Way)</h2><img src="https://www.codeandcompass.net/content/images/2025/08/Gemini_Generated_Image_6bxwka6bxwka6bxw.jpeg" alt="&#x1F680; How to Install pyenv on Ubuntu (The Easy Way)"><p>If you&#x2019;ve ever juggled multiple Python versions on your machine and felt the pain of dependency conflicts, <code>pyenv</code> is your new best friend. It&apos;s a simple yet powerful tool that allows you to install and manage multiple Python versions on the same system without headaches.</p><p>In this guide, I&#x2019;ll walk you through how to install <code>pyenv</code> on <strong>Ubuntu</strong> step by step. 
Whether you&apos;re a beginner or a seasoned developer, this will help you get started quickly and cleanly.</p><hr><h3 id="%F0%9F%A7%B0-what-is-pyenv-and-why-should-you-use-it">&#x1F9F0; What is <code>pyenv</code> and Why Should You Use It?</h3><p><code>pyenv</code> lets you easily:</p><ul><li>Install multiple versions of Python side-by-side</li><li>Switch Python versions globally or per-project</li><li>Create isolated environments (with <code>pyenv-virtualenv</code>)</li></ul><p>This is especially useful if you&apos;re working on different projects that require different Python versions.</p><hr><h2 id="%F0%9F%9B%A0%EF%B8%8F-step-1-install-required-dependencies">&#x1F6E0;&#xFE0F; Step 1: Install Required Dependencies</h2><p>Before installing <code>pyenv</code>, make sure your system has the necessary build tools and libraries:</p><pre><code class="language-bash">sudo apt update
sudo apt install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
libffi-dev liblzma-dev git
</code></pre><p>These packages are essential for compiling and running different versions of Python from source.</p><hr><h2 id="%F0%9F%93%A5-step-2-install-pyenv-via-curl">&#x1F4E5; Step 2: Install <code>pyenv</code> via Curl</h2><p>Use the official installation script to install <code>pyenv</code> and its plugins:</p><pre><code class="language-bash">curl https://pyenv.run | bash
</code></pre><p>This will install:</p><ul><li><code>pyenv</code></li><li><code>pyenv-virtualenv</code> (for managing virtual environments)</li><li><code>pyenv-update</code> (for keeping it up-to-date)</li><li><code>pyenv-doctor</code> (for troubleshooting)</li></ul><hr><h2 id="%F0%9F%A7%A0-step-3-add-pyenv-to-your-shell">&#x1F9E0; Step 3: Add pyenv to Your Shell</h2><p>Now, we need to let your shell know about <code>pyenv</code>. Add the following lines to your shell configuration file.</p><h3 id="if-you-use-bash">If you use <strong>bash</strong>:</h3><pre><code class="language-bash">nano ~/.bashrc
</code></pre><p>Add this at the end of the file:</p><pre><code class="language-bash">export PATH=&quot;$HOME/.pyenv/bin:$PATH&quot;
eval &quot;$(pyenv init --path)&quot;
eval &quot;$(pyenv init -)&quot;
eval &quot;$(pyenv virtualenv-init -)&quot;
</code></pre><p>Then reload your shell:</p><pre><code class="language-bash">source ~/.bashrc
</code></pre><h3 id="if-you-use-zsh">If you use <strong>zsh</strong>:</h3><p>Do the same in <code>~/.zshrc</code>, and run:</p><pre><code class="language-bash">source ~/.zshrc
</code></pre><hr><h2 id="%E2%9C%85-step-4-verify-the-installation">&#x2705; Step 4: Verify the Installation</h2><p>You&#x2019;re almost done! To make sure everything is working, run:</p><pre><code class="language-bash">pyenv --version
</code></pre><p>You should see the installed version of <code>pyenv</code>. If that works, you&#x2019;re ready to start installing Python versions.</p><hr><h2 id="%F0%9F%90%8D-step-5-install-a-python-version">&#x1F40D; Step 5: Install a Python Version</h2><p>Let&#x2019;s install Python 3.12.2 as an example:</p><pre><code class="language-bash">pyenv install 3.12.2
</code></pre><p>Once installed, set it as the global (default) Python version:</p><pre><code class="language-bash">pyenv global 3.12.2
</code></pre><p>You can confirm it with:</p><pre><code class="language-bash">python --version
</code></pre><hr><h2 id="%F0%9F%93%A6-bonus-create-a-virtual-environment">&#x1F4E6; Bonus: Create a Virtual Environment</h2><p>Want an isolated environment for a project?</p><pre><code class="language-bash">pyenv virtualenv 3.12.2 myenv
pyenv activate myenv
</code></pre><p>Now you can install packages without affecting your system Python.</p><hr><h2 id="%F0%9F%92%A1-final-thoughts">&#x1F4A1; Final Thoughts</h2><p>Managing Python versions doesn&#x2019;t have to be messy. <code>pyenv</code> gives you clean control over your development environment, especially when working across projects with different dependencies.</p><p>If you&apos;re working on Ubuntu, this setup will save you time and headaches in the long run. Once you&#x2019;ve got <code>pyenv</code> running, pair it with tools like <code>poetry</code> or <code>pip-tools</code> to take your Python workflow to the next level.</p><p>Got questions? Drop them in the comments &#x2014; I&#x2019;m happy to help.</p><hr>]]></content:encoded></item><item><title><![CDATA[Waiting Room Anthropology: What 200 Strangers and Their Phones Taught Me About Modern Loneliness]]></title><description><![CDATA[<p></p><p>The fluorescent lights hummed overhead in the San Francisco jury assembly room, casting that particular institutional glow that makes everyone look slightly unwell. Around me, nearly 200 people sat in rigid plastic chairs, summoned by civic duty to this windowless room in the Hall of Justice. 
It should have been</p>]]></description><link>https://www.codeandcompass.net/waiting-room-anthropology-what-200-strangers-and-their-phones-taught-me-about-modern-loneliness/</link><guid isPermaLink="false">6883067305a6dc0001451185</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Fri, 25 Jul 2025 04:28:37 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/07/pexels-karolina-grabowska-7876093.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2025/07/pexels-karolina-grabowska-7876093.jpg" alt="Waiting Room Anthropology: What 200 Strangers and Their Phones Taught Me About Modern Loneliness"><p></p><p>The fluorescent lights hummed overhead in the San Francisco jury assembly room, casting that particular institutional glow that makes everyone look slightly unwell. Around me, nearly 200 people sat in rigid plastic chairs, summoned by civic duty to this windowless room in the Hall of Justice. It should have been a fascinating social experiment&#x2014;a true cross-section of the city forced together for hours with nothing but time.</p><p>Instead, it felt like sitting in a library of ghosts.</p><p>I counted the interactions during my first hour: a woman asked someone to watch her purse while she used the restroom. A man inquired about the WiFi password. That was it. Two conversations among two hundred people.</p><p>The rest of us existed in our own digital bubbles, necks craned downward in what chiropractors have dubbed &quot;text neck,&quot; fingers scrolling through endless feeds of other people&apos;s lives while ignoring the actual lives sitting eighteen inches away. We had become experts at being alone together.</p><h2 id="the-great-divide">The Great Divide</h2><p>What struck me most wasn&apos;t the silence itself, but the clear generational fault line running through the room. 
The older citizens&#x2014;those who remembered life before smartphones became appendages&#x2014;were the ones attempting eye contact. They looked around with the patience of people accustomed to waiting without entertainment, occasionally offering shy smiles or commenting on the proceedings.</p><p>A woman in her seventies near me kept glancing hopefully at her neighbors, clearly ready for conversation. She reminded me of my grandmother, who could strike up a meaningful chat with a stranger at the grocery store checkout line. But her potential conversation partners were busy photographing their jury summons for Instagram stories or catching up on work emails.</p><p>I watched one man, probably in his sixties, try three separate times to make small talk with people around him. Each attempt was met with polite but brief responses before his intended conversation partners returned to their screens. Eventually, he gave up and stared at the informational posters on the wall with the resigned attention of someone studying for a test.</p><h2 id="the-irony-of-civic-isolation">The Irony of Civic Isolation</h2><p>There&apos;s something deeply ironic about this scene. Jury duty represents one of our most fundamental civic responsibilities&#x2014;the idea that ordinary citizens can come together, listen to evidence, and make fair decisions that affect real lives. It&apos;s predicated on our ability to connect with and understand our fellow humans, to see past our own biases and experiences.</p><p>Yet here we were, practicing isolation before we&apos;d even reached the courtroom.</p><p>The orientation video played to a room full of people barely watching. It explained the importance of avoiding bias, of listening carefully, of considering different perspectives. But how can we practice those skills if we can&apos;t even make eye contact with the person next to us? 
How do we build the empathy muscles required for fair deliberation when we&apos;ve forgotten how to engage with strangers?</p><h2 id="what-weve-lost">What We&apos;ve Lost</h2><p>I found myself mourning something I&apos;d never experienced: the jury duty of previous generations. I imagined rooms where people actually talked&#x2014;where a retired teacher might chat with a construction worker about their kids&apos; schools, where a recent immigrant might share stories with a fourth-generation San Franciscan, where the kind of casual &quot;nosiness&quot; that builds community bonds would naturally emerge.</p><p>That nosiness wasn&apos;t just idle curiosity&#x2014;it was practice. Practice at being interested in people different from ourselves. Practice at finding common ground with strangers. Practice at the fundamental human skill of connection that democracy requires.</p><p>Instead, we&apos;ve created a society where the default response to uncomfortable silence or unfamiliar people is to retreat into our devices. We&apos;ve become so accustomed to curated connection&#x2014;choosing exactly who we interact with and when&#x2014;that random human contact feels almost invasive.</p><h2 id="the-digital-comfort-zone">The Digital Comfort Zone</h2><p>Our phones have become emotional support objects, providing instant escape from the mild anxiety of being present with strangers. Why make awkward small talk when you can scroll through TikTok? Why wonder about the person next to you when you can catch up with friends on WhatsApp? Why sit with the discomfort of silence when Netflix has episodes downloaded and ready?</p><p>But in choosing comfort, we&apos;ve lost something essential. Those uncomfortable moments of waiting, of not knowing what to say, of reaching across difference&#x2014;those are exactly the moments where empathy grows. 
They&apos;re the social equivalent of physical exercise: mildly unpleasant but ultimately strengthening.</p><h2 id="the-older-generations-gift">The Older Generation&apos;s Gift</h2><p>Watching the older jurors reminded me that this withdrawal isn&apos;t inevitable&#x2014;it&apos;s learned. These were people who remembered when waiting rooms were places of conversation, when delayed flights meant talking to fellow passengers, when being bored together was an opportunity rather than a problem to solve with technology.</p><p>One elderly man eventually started reading a physical newspaper, and I noticed how it became a conversation starter. People could see what he was reading, comment on headlines, ask questions. His newspaper was public in a way our private screens never are. It invited engagement rather than signaling unavailability.</p><h2 id="what-were-teaching-our-juries">What We&apos;re Teaching Our Juries</h2><p>As I sat there, I couldn&apos;t help but wonder: what kind of jurors are we creating in this age of digital isolation? If we can&apos;t practice the basic human skill of talking to strangers in a waiting room, how do we suddenly develop the ability to engage meaningfully in a jury room?</p><p>The skills are the same: listening to people different from ourselves, managing discomfort and uncertainty, staying present when things get awkward or difficult, finding common ground despite obvious differences. But we&apos;re out of practice. We&apos;ve forgotten how to be curious about each other.</p><h2 id="the-path-back">The Path Back</h2><p>I&apos;m not advocating for throwing our phones in the trash or returning to some imaginary golden age. But I am suggesting that we&apos;ve lost something valuable in our rush toward digital connection, and spaces like jury duty waiting rooms reveal the cost.</p><p>Maybe the solution starts small. Maybe it&apos;s as simple as looking up more often, making eye contact, asking someone how their day is going. 
Maybe it&apos;s remembering that the person next to us has a story worth hearing, problems worth understanding, perspectives worth considering.</p><p>Maybe it&apos;s recognizing that our civic duty begins not in the courtroom, but in the waiting room&#x2014;with the simple, radical act of seeing each other as more than just fellow screen-gazers.</p><p>Because if we can&apos;t connect over the shared experience of jury duty, how can we expect to connect over the shared experience of democracy? And if we can&apos;t practice empathy with the stranger sitting next to us, how can we practice it with the defendant whose fate might rest in our hands?</p><p>The fluorescent lights continued to hum. Around me, 200 people continued to scroll. But for the first time that morning, I put my phone away and looked around&#x2014;really looked&#x2014;at my fellow citizens. Some of them, I noticed, were looking back.</p>]]></content:encoded></item><item><title><![CDATA[Ultimate Guide: Managing Multiple Monitors on Ubuntu with DisplayLink & NVIDIA GPUs]]></title><description><![CDATA[<h1 id>&#xA0;</h1><h2 id="%F0%9F%94%8D-why-this-guide">&#x1F50D; Why this guide?</h2><p>Running a mixed&#x2011;GPU desktop (USB&#x2011;based <strong>DisplayLink</strong> adapters <em>plus</em> an <strong>NVIDIA&#xAE; GeForce</strong> card) on Ubuntu is incredibly powerful&#x2014;but it can be tricky.<br>Common headaches include:</p><ul><li>Monitors that randomly reorder after every reboot.</li><li>DisplayLink screens not waking from sleep.</li><li>&#x201C;</li></ul>]]></description><link>https://www.codeandcompass.net/ultimate-guide-managing-multiple-monitors-on-ubuntu-with-displaylink-nvidia-gpus/</link><guid isPermaLink="false">6853a6e9128e0c00010f8a17</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Thu, 19 Jun 2025 06:03:59 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2025/06/ChatGPT-Image-Jun-18--2025--11_03_33-PM.png" 
medium="image"/><content:encoded><![CDATA[<h2 id="%F0%9F%94%8D-why-this-guide">&#x1F50D; Why this guide?</h2><img src="https://www.codeandcompass.net/content/images/2025/06/ChatGPT-Image-Jun-18--2025--11_03_33-PM.png" alt="Ultimate Guide: Managing Multiple Monitors on Ubuntu with DisplayLink &amp; NVIDIA GPUs"><p>Running a mixed&#x2011;GPU desktop (USB&#x2011;based <strong>DisplayLink</strong> adapters <em>plus</em> an <strong>NVIDIA&#xAE; GeForce</strong> card) on Ubuntu is incredibly powerful&#x2014;but it can be tricky.<br>Common headaches include:</p><ul><li>Monitors that randomly reorder after every reboot.</li><li>DisplayLink screens not waking from sleep.</li><li>&#x201C;Unknown display&#x201D; errors in <em>Settings&#xA0;&#x2192;&#xA0;Displays</em>.</li></ul><p>This post walks you through a <strong>bullet&#x2011;proof, script&#x2011;driven workflow</strong> that:</p><ol><li>Installs the correct drivers.</li><li>Restores your preferred layout automatically at login.</li><li>Fixes the most common pitfalls.</li></ol><hr><h2 id="table-of-contents">Table of Contents&#xA0;</h2><ol><li><a href="#prerequisites">Prerequisites &amp; Hardware</a></li><li><a href="#step1">Step&#xA0;1&#xA0;&#x2014; Install / Clean DisplayLink Drivers</a></li><li><a href="#step2">Step&#xA0;2&#xA0;&#x2014; Map Your Monitors with&#xA0;<code>xrandr</code></a></li><li><a href="#step3">Step&#xA0;3&#xA0;&#x2014; Create an Auto&#x2011;Layout Script</a></li><li><a href="#step4">Step&#xA0;4&#xA0;&#x2014; Run the Script at Startup</a></li><li><a 
href="#troubleshooting">Troubleshooting&#xA0;&amp; FAQs</a></li><li><a href="#tips">Power&#x2011;User Tips</a></li><li><a href="#resources">SEO Resources &amp; Further Reading</a></li></ol><hr><p></p><h2 id="1-prerequisites-hardware-checklist">1.&#xA0;Prerequisites &amp; Hardware Checklist&#xA0;</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Item</th>
<th>Recommended Version</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Ubuntu</strong></td>
<td>22.04&#xA0;LTS or 24.04&#xA0;LTS</td>
<td>Xorg session (this guide&#x2019;s <code inline>xrandr</code> scripts require Xorg, not Wayland)</td>
</tr>
<tr>
<td><strong>NVIDIA Driver</strong></td>
<td>Proprietary&#xA0;550+</td>
<td>Install via <code inline>Additional Drivers</code> or <code inline>apt</code></td>
</tr>
<tr>
<td><strong>DisplayLink Driver</strong></td>
<td>5.11&#xA0;for Ubuntu</td>
<td>Download from Synaptics &#x2192; unzip &#x2192; <code inline>./displaylink-installer.sh install</code></td>
</tr>
<tr>
<td><strong>Kernel Headers</strong></td>
<td>Must match running kernel</td>
<td><code inline>sudo apt install linux-headers-$(uname -r)</code></td>
</tr>
<tr>
<td><strong>xrandr</strong></td>
<td>1.5+</td>
<td>Provided by <code inline>x11-xserver-utils</code> (installed by default on Ubuntu desktop)</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
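<p>A quick way to sanity-check the items above in one go (a sketch; adapt the command list to your setup):</p><pre><code class="language-bash">#!/bin/bash
# Report whether each required tool is on PATH, plus the running kernel.
for cmd in xrandr dkms nvidia-smi; do
  if [ -n "$(command -v "$cmd")" ]; then
    echo "$cmd: found"
  else
    echo "$cmd: NOT found"
  fi
done
echo "Running kernel: $(uname -r)"
</code></pre>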
<blockquote><strong>Tip:</strong> Always remove <em>older</em> EVDI DKMS packages before installing a new DisplayLink build:<br><code>sudo dkms remove evdi/&lt;version&gt; --all</code></blockquote><hr><p></p><h2 id="2-step-1-%E2%80%94-install-clean-displaylink-drivers-%E2%9A%99%EF%B8%8F">2.&#xA0;Step&#xA0;1&#xA0;&#x2014; Install / Clean DisplayLink Drivers &#x2699;&#xFE0F;&#xA0;</h2><p><strong>Purge conflicting EVDI builds</strong></p><pre><code class="language-bash">sudo apt purge evdi-dkms
sudo dkms status  # ensure nothing is left
</code></pre><p><strong>Install the official DisplayLink package</strong></p><pre><code class="language-bash">unzip DisplayLink*Ubuntu*.zip
cd DisplayLink*
sudo ./displaylink-installer.sh install
</code></pre><p><strong>Reboot &amp; verify</strong></p><pre><code class="language-bash">systemctl status displaylink-driver
lsusb | grep -i displaylink
</code></pre><p>If the service is <em>active</em> and your USB device appears, you&#x2019;re good to go.</p><hr><p></p><h2 id="3-step-2-%E2%80%94-map-your-monitors-with-xrandr">3.&#xA0;Step&#xA0;2&#xA0;&#x2014; Map Your Monitors with&#xA0;<code>xrandr</code>&#xA0;</h2><p>Run:</p><pre><code class="language-bash">xrandr --listmonitors
</code></pre><p>Sample output:</p><pre><code>Monitors: 4
 0: +*HDMI-1 1920/526x1080/296+1920+1080  HDMI-1
 1: +HDMI-0 1920/526x1080/296+0+1080     HDMI-0
 2: +DVI-I-3-1 1920/526x1080/296+0+0     DVI-I-3-1
 3: +DVI-I-4-2 1920/526x1080/296+1920+0  DVI-I-4-2
</code></pre><p>Interpretation:</p><ul><li><strong>Top row:</strong> <code>DVI-I-3-1</code> (left), <code>DVI-I-4-2</code> (right)</li><li><strong>Bottom row:</strong> <code>HDMI-0</code> (left), <code>HDMI-1</code> (right &amp; primary)</li></ul><blockquote><strong>Got different names?</strong> Copy yours exactly&#x2014;they can change from PC to PC.</blockquote><hr><p></p><h2 id="4-step-3-%E2%80%94-create-an-auto%E2%80%91layout-script-%F0%9F%93%9D">4.&#xA0;Step&#xA0;3&#xA0;&#x2014; Create an Auto&#x2011;Layout Script &#x1F4DD;&#xA0;</h2><p>Create the script:</p><pre><code class="language-bash">mkdir -p ~/.config
nano ~/.config/set-monitor-layout.sh
</code></pre><p>Paste &amp; edit as needed:</p><pre><code class="language-bash">#!/bin/bash
# &#x2014;&#x2014; Fix monitor order for DisplayLink + NVIDIA &#x2014;&#x2014;
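# Positions are "XxY" pixel offsets from the top-left corner of the combined
# virtual desktop: top row at y=0, bottom row at y=1080, right column at x=1920.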

xrandr \
  --output DVI-I-3-1 --mode 1920x1080 --pos 0x0     --rotate normal \
  --output DVI-I-4-2 --mode 1920x1080 --pos 1920x0  --rotate normal \
  --output HDMI-0    --mode 1920x1080 --pos 0x1080  --rotate normal \
  --output HDMI-1    --primary --mode 1920x1080 --pos 1920x1080 --rotate normal
</code></pre><p>Make it executable:</p><pre><code class="language-bash">chmod +x ~/.config/set-monitor-layout.sh
</code></pre><p>Test:</p><pre><code class="language-bash">~/.config/set-monitor-layout.sh
</code></pre><p>Your screens should <em>snap</em> into the saved layout.</p><hr><p></p><h2 id="5-step-4-%E2%80%94-run-the-script-at-startup-%F0%9F%9A%80">5.&#xA0;Step&#xA0;4&#xA0;&#x2014; Run the Script at Startup &#x1F680;&#xA0;</h2><p>GNOME &amp; most DEs use <code>~/.config/autostart/</code>.</p><pre><code class="language-bash">mkdir -p ~/.config/autostart
nano ~/.config/autostart/set-monitor-layout.desktop
</code></pre><p>Insert:</p><pre><code class="language-ini">[Desktop Entry]
Type=Application
Exec=bash -c &quot;sleep 7 &amp;&amp; /home/$USER/.config/set-monitor-layout.sh&quot;
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name=Set Monitor Layout
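# Note: $USER is expanded by bash at run time; the desktop-entry spec itself
# does not expand shell variables inside Exec.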
</code></pre><p><em>Why the <code>sleep 7</code>?</em> DisplayLink needs a few seconds to initialize after login.</p><p>Reboot and watch the magic happen. &#x2705;</p><hr><p></p><h2 id="6-troubleshooting-faqs-%F0%9F%94%A7">6.&#xA0;Troubleshooting &amp; FAQs &#x1F527;&#xA0;</h2>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Symptom</th>
<th>Fix</th>
</tr>
</thead>
<tbody>
<tr>
<td>DisplayLink monitors stay black</td>
<td>Ensure <code inline>displaylink-driver.service</code> is <em>active</em>. Re&#x2011;plug USB cable.</td>
</tr>
<tr>
<td><code inline>xrandr</code> names change after kernel update</td>
<td>Re&#x2011;run <code inline>xrandr --listmonitors</code> and update the script.</td>
</tr>
<tr>
<td>Layout script runs too early</td>
<td>Increase <code inline>sleep</code> to <code inline>10</code>&#x2013;<code inline>12</code> seconds.</td>
</tr>
<tr>
<td>Wayland session ignores script</td>
<td>Switch to <em>Xorg</em> at the login screen; <code inline>xrandr</code> has no effect in a Wayland session. On wlroots-based compositors, <code inline>wlr-randr</code> is the closest equivalent.</td>
</tr>
<tr>
<td>NVIDIA driver update breaks screens</td>
<td>Re&#x2011;run <code inline>nvidia-settings</code> &#x279C; <strong>Save to X Configuration File</strong>, then reboot.</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
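<p>If raising the <code>sleep</code> still races DisplayLink initialization, a <strong>systemd user service</strong> is a more deterministic alternative to the autostart entry (a sketch; the unit name and paths are my own, adjust to taste). Save it as <code>~/.config/systemd/user/monitor-layout.service</code>:</p><pre><code class="language-ini">[Unit]
Description=Restore multi-monitor layout
# Only start once the graphical session is up.
After=graphical-session.target
PartOf=graphical-session.target

[Service]
Type=oneshot
# Give DisplayLink a moment to enumerate its outputs.
ExecStartPre=/bin/sleep 7
ExecStart=%h/.config/set-monitor-layout.sh

[Install]
WantedBy=graphical-session.target
</code></pre><p>Enable it with <code>systemctl --user daemon-reload &amp;&amp; systemctl --user enable --now monitor-layout.service</code>; <code>%h</code> expands to your home directory.</p>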
<hr><p></p><h2 id="7-power%E2%80%91user-tips-%E2%9A%A1%EF%B8%8F">7.&#xA0;Power&#x2011;User Tips &#x26A1;&#xFE0F;&#xA0;</h2><ul><li><strong>Toggle scripts</strong>: create a second script to flip bottom monitors (<code>A</code> &#x1F4B1; <code>B</code>) on demand.</li><li><strong>Hotkeys</strong>: bind layout scripts in <em>Settings&#xA0;&#x2192;&#xA0;Keyboard&#xA0;&#x2192;&#xA0;Custom Shortcuts</em>.</li><li><strong>LightDM global hook</strong>: add <code>display-setup-script=/path/to/script</code> under <code>[Seat:*]</code> in <code>/etc/lightdm/lightdm.conf</code> for system&#x2011;wide layout&#x2014;great for shared PCs.</li><li><strong>Persist across docking/undocking</strong>: use <code>udev</code> rules to run the script whenever the USB DisplayLink device connects.</li></ul><hr><p></p><h2 id="8-seo-resources-further-reading-%F0%9F%8C%90">8.&#xA0;SEO Resources &amp; Further Reading &#x1F310;&#xA0;</h2><ul><li>Synaptics (DisplayLink) official Ubuntu driver downloads</li><li>Ubuntu &#x201C;Additional Drivers&#x201D; wiki</li><li><code>xrandr</code> man page: <code>man xrandr</code></li><li>NVIDIA X Server Settings documentation</li><li>GitHub:&#xA0;<a href="https://github.com/displaylink-rpm?ref=codeandcompass.net">displaylink-rpm &amp; displaylink-debian community installers</a></li></ul><hr><h2 id="%F0%9F%8E%89-wrap%E2%80%91up">&#x1F389; Wrap&#x2011;Up</h2><p>You now have a <strong>stable, hands&#x2011;free, multi&#x2011;monitor workstation</strong> on Ubuntu&#x2014;even with the notoriously finicky combo of DisplayLink and NVIDIA.<br>Set it once, forget it, and focus on getting real work done.&#xA0;</p><p><em>Happy hacking&#x2014;and may your pixels always line up!</em> &#x1F64C;</p>]]></content:encoded></item><item><title><![CDATA[Building a Multi-Environment Kubernetes Cluster for Dev, Staging, and Production]]></title><description><![CDATA[<p></p><p><strong>Building a Multi-Environment Kubernetes Cluster for Dev, Staging, and 
Production</strong></p><p>Kubernetes has emerged as the de facto standard for orchestrating containerized applications, allowing teams to achieve consistency, scalability, and reliability across all stages of the software development lifecycle. One of the most common patterns for modern application delivery involves having</p>]]></description><link>https://www.codeandcompass.net/building-a-multi-environment-kubernetes-cluster-for-dev-staging-and-production/</link><guid isPermaLink="false">66d7b9a255c9e70001095e34</guid><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Wed, 04 Sep 2024 01:36:43 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2024/12/DALL-E-2024-12-13-21.17.25---A-visually-engaging-illustration-for-a-blog-post-about-creating-a-Kubernetes-cluster-for-development--staging--and-production-environments.-The-image-.webp" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2024/12/DALL-E-2024-12-13-21.17.25---A-visually-engaging-illustration-for-a-blog-post-about-creating-a-Kubernetes-cluster-for-development--staging--and-production-environments.-The-image-.webp" alt="Building a Multi-Environment Kubernetes Cluster for Dev, Staging, and Production"><p></p><p><strong>Building a Multi-Environment Kubernetes Cluster for Dev, Staging, and Production</strong></p><p>Kubernetes has emerged as the de facto standard for orchestrating containerized applications, allowing teams to achieve consistency, scalability, and reliability across all stages of the software development lifecycle. One of the most common patterns for modern application delivery involves having multiple environments&#x2014;such as development, staging (or QA), and production&#x2014;deployed on Kubernetes. 
Each environment helps ensure quality, maintain isolation, and streamline continuous delivery, all while maintaining a single standardized platform.</p><p>In this article, we&#x2019;ll walk through the key considerations, patterns, and best practices you&#x2019;ll need to build a multi-environment Kubernetes cluster architecture. Whether you&#x2019;re starting from zero knowledge or refining an already complex setup, this guide aims to take you from the basics all the way to a high-quality, production-ready environment.</p><h3 id="1-understanding-the-multi-environment-model">1. Understanding the Multi-Environment Model</h3><p>When we talk about &#x201C;multi-environment&#x201D; setups, we refer to having multiple, logically separate spaces to run your applications. Typically:</p><ul><li><strong>Development (Dev)</strong>: The environment where developers test new features, experiment, and iterate rapidly. This environment may not have all the production-grade settings but should still reflect a realistic infrastructure so problems can be caught early.</li><li><strong>Staging (QA)</strong>: A near-production environment used to test release candidates, integration with other services, performance testing, and quality assurance checks. The staging environment should closely mirror production in terms of configuration and scale but might not be as large.</li><li><strong>Production (Prod)</strong>: The environment serving real users, customers, or critical business functions. It should be highly stable, secure, monitored, and fully supported by operational best practices.</li></ul><h3 id="2-why-use-kubernetes-for-multiple-environments">2. Why Use Kubernetes for Multiple Environments?</h3><p><strong>Consistency and Portability:</strong><br>Kubernetes enforces a declarative, container-based approach. Once you define how your application runs on Kubernetes, the same definition (with minor modifications) can deploy to Dev, Staging, and Production. 
This consistency reduces surprises and ensures that what passes tests in Staging will likely behave the same way in Production.</p><p><strong>Scalability and Resource Isolation:</strong><br>With Kubernetes, you can scale each environment independently. Developers might only need a small cluster with minimal resources for Dev, while Production can be tuned to handle thousands of simultaneous users.</p><p><strong>Built-in Deployment Strategies:</strong><br>Kubernetes natively supports rolling updates, and higher-level strategies such as Blue-Green and Canary can be layered on top (via your ingress controller, a service mesh, or tools like Argo Rollouts) and applied consistently across all environments. This approach further simplifies the process of moving from Dev to Staging to Production without reinventing the wheel each time.</p><h3 id="3-architecture-considerations">3. Architecture Considerations</h3><p><strong>Single Cluster vs. Multiple Clusters:</strong><br>One of the first design choices is whether to run all environments on a single Kubernetes cluster or to use separate clusters. 
Both approaches have their pros and cons:</p><ul><li><strong>Single Cluster with Namespaces:</strong><ul><li><strong>Pros:</strong> Easier management, fewer clusters to maintain, shared control plane, cost-effective.</li><li><strong>Cons:</strong> Less isolation between environments, risk of resource contention, security boundaries primarily rely on namespace policies and network isolation.</li></ul></li><li><strong>Multiple Clusters per Environment:</strong><ul><li><strong>Pros:</strong> Strong isolation, independent scaling and upgrades, reduced blast radius if something breaks in one environment.</li><li><strong>Cons:</strong> More overhead in managing multiple clusters, potentially higher infrastructure costs.</li></ul></li></ul><p><strong>A Common Hybrid Approach:</strong></p><ul><li><strong>Dev &amp; Staging in a Single Cluster</strong> with separate namespaces (such as <code>dev</code> and <code>staging</code>) for simplicity and cost savings.</li><li><strong>Production in a Dedicated Cluster</strong> for maximum security, scalability, and uptime.</li></ul><figure class="kg-card kg-image-card"><img src="https://www.codeandcompass.net/content/images/2024/12/pexels-divinetechygirl-1181467.jpg" class="kg-image" alt="Building a Multi-Environment Kubernetes Cluster for Dev, Staging, and Production" loading="lazy" width="2000" height="1335" srcset="https://www.codeandcompass.net/content/images/size/w600/2024/12/pexels-divinetechygirl-1181467.jpg 600w, https://www.codeandcompass.net/content/images/size/w1000/2024/12/pexels-divinetechygirl-1181467.jpg 1000w, https://www.codeandcompass.net/content/images/size/w1600/2024/12/pexels-divinetechygirl-1181467.jpg 1600w, https://www.codeandcompass.net/content/images/size/w2400/2024/12/pexels-divinetechygirl-1181467.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="4-choosing-the-infrastructure">4. 
Choosing the Infrastructure</h3><p><strong>Cloud Providers and Managed Kubernetes:</strong><br>Most large-scale multi-environment setups leverage managed Kubernetes offerings like GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service), or AKS (Azure Kubernetes Service). Managed services reduce operational burden and provide integrated security, scaling, and networking features.</p><p><strong>On-Premises Clusters (If Required):</strong><br>If compliance, data sovereignty, or latency requirements dictate on-premises setups, tools like Rancher, OpenShift, or Kubernetes on bare metal can be used. However, expect more complexity in cluster maintenance and scaling.</p><h3 id="5-configuring-your-tooling">5. Configuring Your Tooling</h3><p><strong>Infrastructure as Code (IaC):</strong><br>Use tools like Terraform or Pulumi to provision clusters, networks, and load balancers. This ensures that your environments&#x2019; infrastructure can be version-controlled, peer-reviewed, and reproducible.</p><p><strong>GitOps for Deployment Management:</strong><br>Adopt GitOps principles with tools like Argo CD or Flux to manage environment configurations declaratively. In a GitOps workflow, changes to Kubernetes manifests for Dev, Staging, or Production are driven by pull requests in source control, offering a strong audit trail and easy rollback capabilities.</p><h3 id="6-namespace-strategy">6. Namespace Strategy</h3><p>When using a single cluster for multiple environments, separate them logically using namespaces. 
For instance:</p><ul><li><code>dev</code> namespace for development deployments</li><li><code>staging</code> namespace for QA/staging deployments</li><li><code>prod</code> namespace (in the production cluster, if separate)</li></ul><p>Apply fine-grained Role-Based Access Control (RBAC) to ensure developers can only deploy to <code>dev</code> and <code>staging</code>, while production deployments might require approvals or a CI/CD pipeline with restricted credentials.</p><h3 id="7-security-and-access-controls">7. Security and Access Controls</h3><p><strong>RBAC and Policies:</strong><br>Define clear RBAC rules so that developers have appropriate permissions in Dev and Staging but cannot impact Production. Implement Pod Security Policies (or Pod Security Standards in newer Kubernetes versions) and Network Policies to ensure that services and traffic are confined to their respective environments.</p><p><strong>Secret Management:</strong><br>Use external secret management tools like HashiCorp Vault, AWS Secrets Manager, or GCP Secret Manager, and integrate them with Kubernetes Secret objects. Each environment can have its own set of secrets with different access policies.</p><h3 id="8-resource-management-and-quotas">8. Resource Management and Quotas</h3><p>Apply ResourceQuotas and LimitRanges to prevent Dev or Staging from hogging cluster resources. Ensure that each environment&#x2019;s workloads have set CPU and memory requests and limits. This prevents a buggy development service from affecting the entire cluster&#x2019;s stability.</p><h3 id="9-networking-and-routing">9. Networking and Routing</h3><p><strong>Ingress Controllers and DNS:</strong><br>Set up Ingress Controllers (like NGINX Ingress or an Ingress Controller provided by your cloud provider) to route traffic to the correct namespace based on hostnames or paths. 
For example:</p><ul><li><code>app-dev.example.com</code> &#x2192; routes to the Dev namespace</li><li><code>app-staging.example.com</code> &#x2192; routes to the Staging namespace</li><li><code>app.example.com</code> &#x2192; routes to the Production cluster</li></ul><p><strong>Service Mesh (Optional):</strong><br>A service mesh like Istio or Linkerd can help manage traffic, add observability, and apply security policies consistently across all environments. It can also facilitate canary deployments and progressive rollouts for Staging and Production.</p><h3 id="10-cicd-pipelines">10. CI/CD Pipelines</h3><p><strong>Building a Pipeline for Each Environment:</strong><br>Configure your CI/CD pipeline (e.g., using GitHub Actions, GitLab CI, Jenkins, or CircleCI) so that after code is merged:</p><ul><li>Unit tests and integration tests run for Dev deployments.</li><li>If Dev passes, automatically promote images and manifests to Staging. Perform integration, load, and sanity tests there.</li><li>If Staging passes quality gates, a manual or automated approval step can promote the release to Production.</li></ul><p><strong>Immutable Container Images and Versioning:</strong><br>Tag container images with semantic versions or Git commit SHAs. Pin Deployment manifests to these specific images in each environment, ensuring reproducibility and easier rollbacks.</p><h3 id="11-observability-and-monitoring">11. Observability and Monitoring</h3><p><strong>Centralized Logging and Metrics:</strong><br>Use tools like Prometheus and Grafana for metrics and the ELK/EFK stack (Elasticsearch, Fluentd, Kibana) or OpenSearch and Loki for logs. 
With clear dashboards, it&#x2019;s easier to spot differences in behavior between Dev, Staging, and Production, and to diagnose issues early.</p><p><strong>Distributed Tracing:</strong><br>In complex microservices architectures, distributed tracing (e.g., with Jaeger or OpenTelemetry) helps you identify performance bottlenecks and errors across multiple services. Ensuring similar instrumentation in all environments means you can debug more efficiently.</p><h3 id="12-disaster-recovery-and-backups">12. Disaster Recovery and Backups</h3><p>For Production (and possibly Staging), implement backup and restore strategies for critical data (e.g., persistent volumes, databases). Frequent snapshots and off-site backups ensure you can recover from data loss or cluster disasters.</p><h3 id="13-testing-promotion-flows">13. Testing Promotion Flows</h3><p>Regularly test the entire promotion flow&#x2014;Dev &#x2192; Staging &#x2192; Production&#x2014;to catch configuration drift or pipeline issues. Run simulations where you roll out a new feature to Dev, verify it, then push it to Staging, run tests, and finally move it to Production. Document these processes and build confidence in your release strategy.</p><h3 id="14-handling-configuration-differences">14. Handling Configuration Differences</h3><p>While you want environments to be as similar as possible, some differences are inevitable (e.g., scaling factors, external service endpoints, database sizes). Utilize Kubernetes&#x2019; ConfigMaps, Secrets, and Helm or Kustomize overlays to parameterize these differences cleanly.</p><p>For example, maintain a baseline Helm chart for the application and have separate values files for Dev, Staging, and Production. This approach keeps your core logic consistent but allows environment-specific overrides (e.g., <code>replicaCount: 1</code> for Dev, <code>replicaCount: 3</code> for Staging, <code>replicaCount: 10</code> for Production).</p><h3 id="15-security-posture-and-compliance">15. 
Security Posture and Compliance</h3><p>For Production environments, ensure compliance with standards like PCI-DSS, HIPAA, or GDPR if required. This may mean tighter access control, encrypted communication for all services, and stricter network policies. Some compliance requirements may also apply to Staging for realistic testing, while Dev can remain more flexible.</p><h3 id="16-scaling-over-time">16. Scaling Over Time</h3><p>Start small. Your initial multi-environment setup may be as simple as:</p><ul><li>A single cluster with two namespaces: Dev and Staging.</li><li>One separate Production cluster.</li></ul><p>As your team grows and your workloads become more complex, you can expand to multiple clusters, add more fine-grained namespaces, incorporate a service mesh, adopt advanced deployment strategies, or integrate a more sophisticated GitOps pipeline.</p><h3 id="17-continuous-improvement-and-auditing">17. Continuous Improvement and Auditing</h3><p>Once the initial setup is running:</p><ul><li>Continuously audit resource usage, cluster node sizes, and namespace quotas.</li><li>Regularly review RBAC rules and secrets management policies.</li><li>Update cluster components (Kubernetes versions, Ingress Controller versions, etc.) according to a well-defined schedule.</li></ul><h3 id="18-summing-it-up">18. Summing It Up</h3><p>Building a multi-environment Kubernetes cluster strategy is not just about spinning up multiple clusters or namespaces. It&#x2019;s a thoughtful process that involves setting up the right infrastructure-as-code pipelines, implementing strong security and access controls, integrating observability, ensuring proper promotion flows, and maintaining a robust CI/CD strategy.</p><p>By starting from zero&#x2014;focusing first on understanding the environment model, then moving through architecture choices, tooling, security, and finally advanced techniques&#x2014;you can build a stable, scalable, and efficient multi-environment Kubernetes ecosystem. 
Over time, as you refine this setup and incorporate feedback from your dev, ops, and QA teams, you&#x2019;ll have a world-class platform that can quickly and safely deliver value to users across all stages of development and production.</p>]]></content:encoded></item><item><title><![CDATA[How to Push Your Helm Charts to a Docker (OCI) Registry]]></title><description><![CDATA[<p><strong>Introduction</strong><br>Helm has become the go-to package manager for Kubernetes, simplifying the deployment and management of complex applications by bundling all the required Kubernetes manifests into easily distributable &#x201C;charts.&#x201D; Traditionally, Helm charts have been hosted in dedicated Helm repositories or stored as files in version control systems. However,</p>]]></description><link>https://www.codeandcompass.net/coming-soon/</link><guid isPermaLink="false">66d7b4f7edc23600015570c8</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Kevin Edraki]]></dc:creator><pubDate>Wed, 04 Sep 2024 01:16:39 GMT</pubDate><media:content url="https://www.codeandcompass.net/content/images/2024/12/DALL-E-2024-12-16-13.27.07---A-conceptual-digital-illustration-depicting-the-movement-of-code-represented-as-glowing-lines-of-binary--1s-and-0s--flowing-towards-a-futuristic-repos.webp" medium="image"/><content:encoded><![CDATA[<img src="https://www.codeandcompass.net/content/images/2024/12/DALL-E-2024-12-16-13.27.07---A-conceptual-digital-illustration-depicting-the-movement-of-code-represented-as-glowing-lines-of-binary--1s-and-0s--flowing-towards-a-futuristic-repos.webp" alt="How to Push Your Helm Charts to a Docker (OCI) Registry"><p><strong>Introduction</strong><br>Helm has become the go-to package manager for Kubernetes, simplifying the deployment and management of complex applications by bundling all the required Kubernetes manifests into easily distributable &#x201C;charts.&#x201D; Traditionally, Helm charts have been hosted in dedicated Helm repositories or stored as 
files in version control systems. However, with the advent of Helm 3 and its support for OCI (Open Container Initiative) specifications, you can now store and distribute Helm charts via Docker (OCI) registries&#x2014;just like container images.</p><p>In this post, we&#x2019;ll walk through the process of preparing your Helm chart, authenticating to a Docker registry, and using the OCI support in Helm 3 to push your chart to a Docker-based registry such as Docker Hub or a private OCI-compatible registry.</p><hr><p><strong>Prerequisites</strong></p><ol><li><strong>Docker Registry Access</strong>:<br>You&#x2019;ll need access to a Docker registry (such as Docker Hub, Amazon ECR, or a self-hosted OCI-compatible registry) and proper permissions to push images.<ul><li>For Docker Hub, you need a Docker Hub account.</li><li>For self-hosted registries, ensure you have a reachable endpoint and credentials.</li></ul></li></ol><p><strong>OCI Support Enabled (If Needed)</strong>:<br>For Helm versions prior to 3.8, you might need to explicitly enable the experimental OCI feature:</p><pre><code class="language-bash">export HELM_EXPERIMENTAL_OCI=1
</code></pre><p>From Helm 3.8 onwards, OCI support is stable and enabled by default.</p><p><strong>Helm 3.7 or Later</strong>:<br>OCI support in Helm started as an experimental feature and became stable in Helm 3.8; the <code>helm push</code> command used below requires at least Helm 3.7. Make sure you have a recent version:</p><pre><code class="language-bash">helm version
</code></pre><p>If you&#x2019;re running an older version, you can <a href="https://github.com/helm/helm/releases?ref=codeandcompass.net">download the latest release</a>.</p><hr><p><strong>Step-by-Step Guide</strong></p><h3 id="step-1-prepare-your-helm-chart">Step 1: Prepare Your Helm Chart</h3><p>If you don&#x2019;t already have a Helm chart, you can create one using:</p><pre><code class="language-bash">helm create mychart
</code></pre><p>This command generates a starter chart in the <code>mychart</code> directory. Inside, you&#x2019;ll find a <code>Chart.yaml</code>, <code>values.yaml</code>, templates, and other configuration files. Adjust the chart&#x2019;s metadata, deployment configurations, and values as needed for your application.</p><h3 id="step-2-package-the-helm-chart">Step 2: Package the Helm Chart</h3><p>Before pushing to a registry, you need to package your Helm chart into a <code>.tgz</code> archive. The <code>helm package</code> command does this easily:</p><pre><code class="language-bash">cd mychart
helm package .
</code></pre><p>This command creates a file named <code>mychart-&lt;version&gt;.tgz</code> in the current directory. The version is pulled from <code>Chart.yaml</code>.</p><h3 id="step-3-enable-and-use-oci-support-if-required">Step 3: Enable and Use OCI Support (If Required)</h3><p>If you&#x2019;re using Helm 3.7 or earlier, you must enable OCI support by setting an environment variable:</p><pre><code class="language-bash">export HELM_EXPERIMENTAL_OCI=1
</code></pre><p>For Helm 3.8 and later, OCI support is automatically enabled, and you can skip this step.</p><h3 id="step-4-log-in-to-the-docker-registry">Step 4: Log In to the Docker Registry</h3><p>To push your Helm chart, you need to be authenticated to the registry. For Docker Hub, you can use:</p><pre><code class="language-bash">helm registry login registry-1.docker.io \
  --username &lt;your-username&gt; \
  --password &lt;your-password&gt;
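# Tip: keep credentials out of your shell history by piping the password
# to --password-stdin instead (here $DOCKER_PAT is a placeholder for your
# Docker Hub password or access token):
#   echo "$DOCKER_PAT" | helm registry login registry-1.docker.io \
#     --username &lt;your-username&gt; --password-stdin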
</code></pre><p>If you&#x2019;re using another OCI-compatible registry (for example, <code>ghcr.io</code> for GitHub Container Registry or a private registry), adjust the URL accordingly:</p><pre><code class="language-bash">helm registry login ghcr.io \
  --username &lt;your-username&gt; \
  --password &lt;your-personal-access-token&gt;
</code></pre><p>Successful login means Helm (and the underlying tooling) has stored your credentials for subsequent operations.</p><h3 id="step-5-push-the-helm-chart-to-the-registry">Step 5: Push the Helm Chart to the Registry</h3><p>With OCI, Helm treats the registry similarly to how you would push images. The syntax involves specifying an <code>oci://</code> prefix, and Helm appends the chart name to whatever path you give it. Since Docker Hub only supports two-level repository names (<code>namespace/repository</code>), push to your namespace directly and let Helm create the chart repository. For example, to push your <code>mychart-&lt;version&gt;.tgz</code> chart to Docker Hub under your account <code>myusername</code>:</p><pre><code class="language-bash">helm push mychart-&lt;version&gt;.tgz oci://registry-1.docker.io/myusername
</code></pre><p><strong>What&#x2019;s happening here?</strong></p><ul><li><code>oci://</code> indicates that you&apos;re using an OCI registry.</li><li><code>registry-1.docker.io/myusername</code> is your namespace on Docker Hub; Helm appends the chart name, so the chart lands in the <code>myusername/mychart</code> repository.</li></ul><p>If everything is correct, Helm will push the chart and give you a confirmation message. Your chart is now stored in the Docker registry, just like a container image.</p><h3 id="step-6-verify-the-chart-in-the-registry">Step 6: Verify the Chart in the Registry</h3><p>After pushing, you can verify that the chart exists in the registry. While Helm doesn&#x2019;t have a direct &#x201C;list&#x201D; command for registries at this point, you can use <code>helm pull</code> to confirm retrieval:</p><pre><code class="language-bash">helm pull oci://registry-1.docker.io/myusername/mychart --version &lt;chart-version&gt;
</code></pre><p>If the pull succeeds, you&#x2019;ve confirmed that the chart was successfully stored and can be retrieved.</p><hr><p><strong>Additional Tips and Best Practices</strong></p><ol><li><strong>Versioning and Tagging</strong>:<br>Treat your Helm charts like container images&#x2014;use semantic versions and consider appending additional tags for clarity. This makes it easier to manage, roll back, and track deployments over time.</li><li><strong>Automated CI/CD Pipelines</strong>:<br>Integrate chart packaging and pushing into your CI/CD pipeline. For instance, a GitHub Actions workflow can automatically package and push the chart upon merging to the main branch.</li><li><strong>Use Private Registries for Proprietary Software</strong>:<br>If your application and charts are not open-source or public, consider using a private OCI registry. Many cloud providers offer private container registries with native OCI support.</li><li><strong>Access Control and RBAC</strong>:<br>Manage access to your Helm charts using the registry&#x2019;s authentication and authorization mechanisms. This ensures that only trusted team members or CI/CD agents can push or pull charts.</li></ol>]]></content:encoded></item></channel></rss>