Blog

Ideas, notes,
and field reports.

Technical SEO, web development, AI workflows, OSINT, and what it actually looks like to build systems that work — written from the field, not the boardroom.

The 5-Minute Technical SEO Audit I Run on Every New Site

I've seen hundreds of sites that look like Ferraris on the outside but have lawnmower engines under the hood. You don't need a $500/month tool. Here's how I do it in five minutes with nothing but a browser and free tools.


Claude API + Python: My Actual Automation Workflow

Not a tutorial. A real account of how I integrated Claude API calls into Python scripts to handle content structuring, schema generation, and research summaries — with the failures included.

OSINT Without Getting Burned: A Practical Intro

Open-source intelligence is powerful and it's easy to misuse. Here's how I approach investigations ethically — the methodology, the tooling, and the guardrails I set for myself before I start.

DirectAdmin vs cPanel: What I Actually Use and Why

After 20 years of managing hosting environments, I switched from cPanel to DirectAdmin and haven't looked back. Here's the honest comparison — cost, UX, performance, and the tradeoffs.

Music-Driven Work: How Creative Energy Changes Output

There's a measurable difference in the quality of my work depending on what's playing. I've been tracking this informally for years — here's what I've noticed about music, focus, and building.

What 20 Years of Building Websites Taught Me About Starting Over

I've rebuilt williamlodge.com more times than I can count. Each iteration is better than the last — not because the tools are better, but because the thinking is clearer. Here's the pattern.

The VPS Hardening Checklist I Run on Every New Server

fail2ban, ufw, SSH key auth, unattended-upgrades — the minimum viable security setup for any Linux server, done in order, explained plainly. No fluff, no hand-waving.

Always Relevant

The 5-Minute Technical SEO Audit

Run this on every site, every time.

VPS Hardening Checklist

Minimum viable Linux server security.

DirectAdmin vs cPanel

The honest comparison after 20 years.

Latest Notes

Claude API + Python Workflow

March 2026 — real automation, not demos.

OSINT Without Getting Burned

Feb 2026 — practical intro to the discipline.

Music-Driven Work

Jan 2026 — creative energy and output quality.

Get new posts

No newsletter platform. No tracking pixel. Just an email when something new publishes.

← Back to Blog

The 5-Minute Technical SEO Audit I Run on Every New Site

"I've been a freelancer for decades. In that time, I've seen hundreds of sites that look like Ferraris on the outside but have lawnmower engines under the hood. Most 'SEO Experts' will try to sell you a $2,000 audit that takes three weeks. I'm going to show you how I do it in five minutes with nothing but a browser and a couple of free tools."

This isn't about keywords or vibes. This is about whether Google's crawlers can actually read your code — and whether your server is actively fighting against your rankings. These are the five checks I run before I do anything else on a new site. In this order.

01

The Google "Health Check" — The site: Operator

The first thing I do is check what Google actually thinks exists on your domain.

The Check: Go to Google and type site:yourdomain.com

What to look for: Does the number of results match your actual page count? If you have a 5-page portfolio but Google shows 400 results, you've either been hacked or your "Hello World" demo content is indexed. Both are bad.

📸 [SCREENSHOT: Example of a clean vs. cluttered 'site:' search result — 5 results vs. 400 results]

The Fix: Use a noindex meta tag on utility pages, or delete the junk entirely. Go to Google Search Console and request removal of anything you can't delete.
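The tag itself is one line. A sketch of what goes in the head of a utility or demo page you want to keep live but out of the index:

```html
<!-- keeps the page reachable but tells crawlers not to index it -->
<meta name="robots" content="noindex">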

02

Core Web Vitals — The Speed Trap

Google doesn't just care if you're fast — they care if you're stable and predictable. Core Web Vitals measure three things: how fast the biggest element loads (LCP), how much the layout shifts while loading (CLS), and how quickly the page responds to input (INP).

🔧 Tool: PageSpeed Insights — pagespeed.web.dev (free, official Google)

What to look for: LCP over 2.5 seconds means your visitors are leaving before the page loads. A CLS score above 0.1 means your layout is jumping around while loading, which destroys trust.

The Fix: Compress your images (WebP format, under 200KB for most). Implement server-side caching. If you're on a VPS, check your PHP memory limits — an underpowered server is the silent killer of CWV scores.

03

The Robot Barrier — robots.txt

I've seen developers accidentally leave the "Discourage search engines" box checked in WordPress for months after launch. The client wonders why their traffic is zero. This is why.

The Check: Navigate to yourdomain.com/robots.txt

What to look for: Does it say Disallow: /? If yes, you've told every search engine crawler to leave. You are invisible.

The Fix: Remove the blanket Disallow: / (or replace it with Allow: /) and make sure your sitemap URL is listed in the file.
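For reference, a minimal healthy robots.txt, with the domain as a placeholder:

```
User-agent: *
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```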

04

Schema Validation — The Translator

Structured data is how you talk to both search engines and large language models. If it's broken, they fall back on guessing — and they often guess wrong.

🔧 Tool: Schema.org Validator — validator.schema.org (free)

What to look for: Red errors. Warnings are acceptable. Errors mean your structured data is being ignored entirely.

The Fix: Correct the JSON-LD syntax in your <head> block. Start with Person or WebSite on the homepage and Article on blog posts.
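As a starting point, here is a minimal WebSite block for a homepage. Every value is a placeholder to swap for your own; run the result through the validator before shipping:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Your Site Name",
  "url": "https://yourdomain.com/"
}
</script>
```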

05

Mobile Usability — The Real World

Most of your clients in Boulder are looking at your site on an iPhone while walking down Pearl Street. If your site doesn't work on mobile, your ranking doesn't matter.

The Check: Open your site on your phone and try to tap two links that are close together. Lighthouse in Chrome DevTools also flags undersized tap targets.

What to look for: "Clickable elements too close together" or horizontal scrolling. Both are Google ranking signals — not just UX annoyances.

The Fix: Minimum touch target size is 44×44px. Use padding, not font-size changes, to hit that target.
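A sketch of that fix in CSS. The selector is illustrative; the numbers follow the 44×44px rule above:

```css
/* padding grows the tap target; the font size never changes */
nav a {
  display: inline-block;
  padding: 12px 16px;
  min-width: 44px;
  min-height: 44px;
}
```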

Technical SEO isn't magic — it's maintenance. Run this check the day a site launches and once a month after that. You'll stay ahead of 90% of the competition, who either never check or wait until something breaks.

The goal isn't a perfect score. It's a site that Google can read, trust, and rank. These five checks are the foundation. Everything else — keywords, content, backlinks — is built on top of this.

← Back to Blog

Claude API + Python: My Actual Automation Workflow

"Every tutorial shows you the happy path. Here's the path I actually walk — including the dead ends, the rate limit walls, and the one script that saved me six hours a week once I got it right."

I've been using Claude's API as a core part of my workflow since mid-2025. Not for generating blog posts or summarizing PDFs — I mean actually integrating it into Python scripts that run on a schedule, produce structured output, and feed into other systems. This is what that looks like in practice.

01

The Setup: API Key, Environment, Basic Call

I keep my API key in a .env file and load it with python-dotenv. I never hardcode it — not even in a local script that "will never leave this machine." That discipline saved me once when I accidentally pushed a test repo.

pip install anthropic python-dotenv

# .env
ANTHROPIC_API_KEY=sk-ant-...

The basic call is two files: a config.py that initializes the client, and the actual script. I separate them early even for small projects — it pays off when the project grows.

02

The Use Case That Actually Stuck: Schema Generation

I build a lot of static HTML sites. Every page needs JSON-LD schema markup — and writing it by hand for 30 pages is the kind of work that eats an afternoon without producing anything interesting. So I wrote a script that takes a page's title, description, URL, and type, and returns a complete, valid schema block.

The key insight: prompt engineering matters more than Python skill here. The difference between a schema block that validates and one that doesn't comes down to how precisely you specify the output format. I ended up with a system prompt that includes a full example schema block and explicit instructions about which fields are required vs. optional.
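The enforcement side of that can be small. A sketch of the validation step, where the required-field list is illustrative rather than my exact spec:

```python
import json

REQUIRED_FIELDS = {"@context", "@type", "headline", "url"}

def validate_schema_block(raw: str) -> dict:
    """Parse a model reply and refuse anything missing required JSON-LD fields."""
    block = json.loads(raw)  # raises ValueError on non-JSON replies
    missing = REQUIRED_FIELDS - block.keys()
    if missing:
        raise ValueError(f"schema block missing: {sorted(missing)}")
    return block
```

Anything that fails here goes back through the prompt, not into the page.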

📸 [SCREENSHOT: script output — clean JSON-LD block for a BlogPosting schema, validated green in schema.org validator]
03

Handling Rate Limits Without Losing Work

The first time I ran a batch job against a list of 80 URLs, I hit a rate limit around item 60 and lost everything. My fault — I had no error handling. The fix was a retry loop with exponential backoff and a results file that appends on success rather than waiting for the end.

Now every batch script I write saves results incrementally. If it fails at item 60, it picks up at item 61 on the next run. This sounds obvious in retrospect. It wasn't at 2am when I was rebuilding a site schema set from scratch.

🔧 Library: tenacity — clean retry decorators for Python, handles backoff logic automatically
04

Research Summaries: The Workflow That Replaced Three Hours

For the HomelessBoulder.com resource pages, I needed to research and summarize dozens of local service providers — addresses, hours, eligibility, contact info. I was doing this manually from government sites and nonprofit pages. It was slow and error-prone.

I built a pipeline: fetch the page, extract the relevant text, send it to Claude with a structured prompt, get back a JSON object with the fields I need. The JSON goes directly into the page template. Total time per provider dropped from 15 minutes to 90 seconds.
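Stripped to its shape, the pipeline is a few functions deep. This sketch injects the fetch and model calls so each step stays testable; the field names are illustrative, not the exact schema I use:

```python
import json

def summarize_provider(url, fetch, ask_model):
    """Fetch a provider page, ask the model for structured JSON, parse it."""
    text = fetch(url)
    prompt = (
        "From the text below, return JSON with keys: name, address, hours, "
        "contact. Use null for anything the text does not state.\n\n" + text
    )
    record = json.loads(ask_model(prompt))
    # The model sometimes invents plausible values, so nothing ships unreviewed.
    record["needs_review"] = True
    return record
```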

The failure modes: some sites block automated requests (solved with delays and a real user-agent header). Some have PDFs that are actually scanned images (not solvable without OCR, which I haven't added yet). And Claude occasionally fills in fields it can't find with plausible-sounding but wrong data — so I always verify the output before it goes live.

05

The Failure I Still Think About

I tried to automate the entire content structure of a new site — meta descriptions, H1s, section copy — from a single intake brief. It worked, in the sense that it produced text. But the output was technically correct and completely characterless. It passed every SEO check and felt like reading a terms-of-service agreement.

The lesson: Claude is exceptional at structured output, research synthesis, and pattern application. It's a poor substitute for voice. I use it now to handle the architecture — the schema, the structured data, the repetitive metadata — and I write the actual words myself. The combination is better than either alone.

None of this is magic. It's just careful engineering applied to a powerful API. The scripts I use most are the boring ones — they do one thing well, handle their errors gracefully, and run quietly in the background while I work on something else.

If you're a developer considering adding AI calls to your Python workflow, start with one specific, repetitive task you already know how to do manually. Automate that. Learn the failure modes. Then expand.

← Back to Blog

OSINT Without Getting Burned: A Practical Intro

"The most dangerous thing about OSINT isn't the tools — it's the researcher's confidence. The faster you move, the easier it is to confuse 'publicly accessible' with 'mine to act on.' I set guardrails before I open a single tab."

Open-source intelligence is the discipline of gathering and analyzing publicly available information to produce actionable insights. You don't need special access, credentials, or software. You need method. Here's the method I use, and the ethics I've built into it from the beginning.

01

Define Your Question Before You Start

The single most important thing you can do before an OSINT investigation is write down a specific question in plain English. Not "research this person" — that's a direction, not a question. Something like: "Does the entity behind this website have a verifiable physical presence in Boulder, CO?"

A specific question tells you when to stop. Without it, OSINT investigations expand endlessly into adjacent interesting facts that have nothing to do with what you actually needed. Scope creep in research is how investigations become rabbit holes and how researchers get lost.

02

The Ethical Guardrails I Set Every Time

Before I open any tool, I ask three questions: Is this genuinely public? (Not just technically accessible — actually intended to be public.) Is my purpose defensive or constructive? (Verifying a vendor, protecting a community, documenting a pattern — not satisfying curiosity about a private individual.) Would this hold up if the subject could see exactly what I was doing?

If the answer to any of these is no or uncertain, I stop. OSINT that starts in an ethically grey area almost always ends somewhere worse.

🔒 Resource: OSINT Curious — ethical OSINT practice and case studies for investigators
03

The Starting Stack (Free, Browser-Based)

You don't need Maltego or a paid intelligence platform to do useful research. My starting stack is: Google advanced operators for indexed content (site:, inurl:, filetype:), ViewDNS.info for domain infrastructure, Have I Been Pwned for breach exposure, and the Wayback Machine for historical snapshots.

These four tools answer 80% of the research questions I encounter. The specialist tools — geolocation, social media pivoting, image verification — only come out when the basics don't resolve the question.
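A few illustrative operator combinations, with example.com standing in for the research target:

```
site:example.com filetype:pdf         every indexed PDF on the domain
inurl:login site:example.com          login pages Google has already seen
"Example LLC" -site:example.com       mentions of the entity away from its own site
```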

04

Document Everything As You Go

Screenshots with timestamps. URLs with retrieval dates. The version of a page you saw, not just the current version. OSINT evidence degrades — websites change, social media posts disappear, WHOIS records get updated. If you don't capture it when you find it, you may not be able to find it again.

I use a simple markdown file for each investigation. Timestamped entries, screenshot filenames, the exact query that surfaced each piece of information. It's not glamorous. It's the difference between research you can defend and research you can't.
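One possible shape for that file. Every entry, query, and filename here is invented for illustration:

```markdown
# Investigation: Does the entity behind example.com have a Boulder presence?
Opened: 2026-01-15

## 14:32 UTC
- Query: site:example.com filetype:pdf
- Found: annual report listing a Boulder PO box (page as of retrieval)
- Capture: 2026-01-15_1432_example_annual_report.png
```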

05

Knowing When You're Done

Go back to your original question. Can you answer it with the evidence you've collected? Then you're done. OSINT investigations don't have a natural stopping point — the data always suggests another pivot, another rabbit hole, another interesting thread. Discipline is knowing that interesting isn't the same as necessary.

If you can't answer your original question after a reasonable effort, the honest finding is "insufficient public evidence to conclude." That's a legitimate answer. Document it and stop.

OSINT is a legitimate, valuable discipline. It's also one that attracts people who want to feel powerful with publicly accessible tools, and that combination can cause real harm to real people. The methodology I've described isn't a constraint on effectiveness — it's what makes the work defensible, repeatable, and worth doing.

For a deeper look at tools, workflows, and the ethics framework I've built out, see the full LodgeOSINT hub.

← Back to Blog

DirectAdmin vs cPanel: What I Actually Use and Why

"I ran cPanel for twelve years. Then cPanel doubled its pricing in 2019 and I had to make a decision. I made it. I haven't looked back, but I can still tell you exactly what you give up."

This is a practical comparison from someone who manages their own VPS, runs multiple client and personal properties, and does not have an enterprise budget. If you're running a single shared hosting account, this article isn't for you. If you're managing a VPS with multiple domains, read on.

01

Cost: The Number That Changed Everything

In 2019, cPanel moved to per-account pricing. A VPS with 30 hosted accounts went from a flat ~$15/month licensing fee to ~$45/month overnight. For a solo freelancer managing a few client sites alongside personal projects, that's a real hit with no corresponding increase in value.

DirectAdmin's current pricing sits around $2/month for a standard license. I'm not going to pad this: the cost difference is the primary reason I switched. Everything else I'm about to tell you is secondary to that number.

02

The UX Gap: cPanel Wins, But Not by Much Anymore

cPanel's interface is more polished. The one-click installers are better integrated. The file manager is slightly more intuitive. If you're a client who needs to occasionally log in to update a plugin, cPanel is easier to hand off without training.

DirectAdmin has improved significantly. The Evolution skin is clean and fast. The core tasks — creating accounts, managing DNS, configuring email, running backups — all work. It's not as visually refined, but it's competent. The things that would make a client panic in DirectAdmin are the things they should be calling me about anyway.

03

Performance: DirectAdmin Is Lighter

DirectAdmin uses fewer server resources. On a mid-tier VPS, this is measurable — the control panel itself doesn't eat into the RAM and CPU that should be serving your sites. cPanel has always been a heavy application. DirectAdmin is built leaner.

For the sites I host — mostly static HTML, some WordPress installs, a few Python apps — this matters. A resource-efficient control panel means more headroom for the actual workload.

04

Where cPanel Still Wins

Ecosystem. cPanel has better plugin support, wider documentation, and more third-party integrations. If you're running a reseller hosting business with 100+ clients who need to self-manage their accounts, cPanel's reseller tooling in WHM is more fully featured. And if you need to hand off a server to a client who expects a familiar interface, cPanel is what they've probably seen before.

Staging environments. WordPress staging through Softaculous is better integrated on cPanel. DirectAdmin supports Softaculous too, but the staging experience is slightly rougher.

05

The Actual Recommendation

If you're a developer managing your own VPS for your own work: use DirectAdmin. The cost savings are real, the performance is better, and the functionality is sufficient for everything you'll actually need.

If you're setting up hosting for clients who need to manage their own accounts with minimal support from you: consider cPanel, budget for the licensing, and build it into your pricing.

If you're on shared hosting through a provider: you don't get to choose. Use what they give you and focus on the work.

The switch from cPanel to DirectAdmin took me about a day of migration work. Three years later, I've never had a situation where I needed something DirectAdmin couldn't provide. The cost savings have paid for themselves many times over.

← Back to Blog

Music-Driven Work: How Creative Energy Changes Output

"I spent twelve years managing live audio and broadcast signal chains. Sound isn't background for me — it's structural. When I figured out how to use that deliberately in development work, something shifted."

This isn't a productivity hack. It's an observation I've made over a long time that I've never written down before. The music playing during a work session affects not just how fast I work, but what I make.

01

The Three Work Modes and What They Require

I've noticed my work falls into roughly three modes: deep architecture work (system design, complex debugging, writing that requires real thought), execution work (known tasks with known solutions — build this form, write this schema, update these styles), and creative work (design decisions, problem framing, anything requiring a blank-page start).

Each mode calls for different audio. Deep architecture work needs low-information-density music — ambient, minimal lyrics, long instrumental passages. Execution work tolerates higher-energy music with more structure. Creative work, counterintuitively, often works best with music I know well enough to not hear.

02

What "Flow State" Actually Sounds Like

There's a specific physiological state when deep work is going well — breath slower, physical awareness reduced, hours compressing. I used to stumble into it accidentally. After years of paying attention, I can reliably trigger it with the right audio environment.

For me, that's Boards of Canada, Four Tet, or long ambient sets. The analog warmth and slight temporal ambiguity in that music seems to disengage the self-monitoring part of the brain that interrupts flow. The science on this is real — tempo, predictability, and the presence or absence of lyrics each have measurable effects on cognitive performance by task type.

03

The Production Management Connection

At Cox Enterprises, I spent years thinking about signal chains — the path a signal takes from source to output, and every point along the way where it can be degraded or improved. Mixing a live broadcast is about removing interference and preserving signal integrity under pressure.

I think about my work sessions the same way now. The "signal" is the thought I'm trying to complete. The "interference" is everything that interrupts it — notifications, discomfort, cognitive load from the wrong kind of audio. Managing that interference is part of the craft, whether the craft is audio or code.

04

The Practical Takeaway

I'm not telling you to listen to what I listen to. I'm suggesting that if you work in a focused discipline and haven't paid attention to how different audio environments affect your output — start paying attention. It's free information, and the signal is strong once you start looking for it.

Spend two weeks tracking what was playing during your best work sessions and your worst. The pattern will probably surprise you.

The full version of this — the playlists, the artist spotlights, and the connection between production management and development methodology — lives on the Music page if you want to go deeper.

← Back to Blog

What 20 Years of Building Websites Taught Me About Starting Over

"My first website was in 1998. I've rebuilt williamlodge.com more times than I can count. Each version looked better than the last — but the improvements that mattered weren't visual. They were structural, and they came from finally understanding what the site was actually for."

This is a piece about iteration and clarity — specifically about how many times you have to start over before you understand what you're building and why. I've started over enough times to see the pattern clearly.

01

The Early Versions: Showing Everything

The first versions of my personal site tried to contain everything I'd ever done. Broadcast engineering, AV production, web development, music, audio work, consulting — all of it, presented without hierarchy. The implicit argument was "I can do everything," which communicates nothing useful to anyone who might want to hire me.

This is the first-version mistake almost everyone makes. The site is a mirror for the builder's ego, not a tool for the visitor's decision. Recognizing that distinction took me years.

02

The Middle Versions: Over-Designed, Under-Thought

The next phase was design-led rebuilds. I'd learn a new technique — CSS grid, Intersection Observer animations, parallax effects — and rebuild the site to showcase what I'd just learned. These versions looked impressive. They had weak information architecture and served primarily to demonstrate that I could build impressive-looking sites.

The lesson: a website that demonstrates your technical skills is not the same as a website that serves your business goals. For most freelancers, those are two different things that need to be kept in productive tension, not collapsed into one.

03

The Clarity Moment: What Is This For?

At some point I stopped asking "what do I want to put on my site" and started asking "what decision do I want a visitor to make, and what do they need to make it?" That shift in framing changes everything downstream — the content, the structure, the calls to action, the visual hierarchy.

For williamlodge.com, the answer is: a potential client or collaborator needs to understand what I do, see evidence that I do it well, and have a clear path to get in touch. That's three things. Every element of the site should serve one of those three things or it shouldn't be on the site.

04

The Current Version: Architecture Before Aesthetics

The current site was built architecture-first. Information hierarchy. URL structure. Content types. Navigation logic. I had all of that designed before I wrote a single line of CSS. The visual design came after — and because the structure was solid, the visual decisions were mostly obvious.

This is the opposite of how I built sites for the first decade. And it's faster. A week of structural thinking eliminates three weeks of redesigns.

05

What "Starting Over" Actually Means Now

I still start over — but now "starting over" means revisiting the fundamental question rather than rebuilding the CSS. When the site stops serving its purpose, I ask again: what is this for? Who is it for? What decision am I trying to enable?

The answers change as the work changes. That's fine. The discipline of asking the question clearly, before touching any code, is what keeps the iterations productive rather than circular.

Twenty years of starting over has produced something I'd call productive humility — the understanding that the current version is always provisional, that it will need to be rebuilt, and that the rebuild will be better if I do the thinking clearly first.

The 1998 site is gone. Everything I learned from building it is still here.

← Back to Blog

The VPS Hardening Checklist I Run on Every New Server

"A fresh VPS is an open door. The default configuration exists to make the server accessible, not secure. These are the first six things I do before I deploy anything — in this order, without skipping steps."

This checklist assumes an Ubuntu/Debian VPS, root SSH access from your provider, and that you're comfortable in a terminal. It's the minimum viable hardening setup. It is not comprehensive security — it is the baseline that makes you a harder target than 80% of the servers on the internet.

01

Create a Non-Root User and Lock Root Login

Never operate as root after initial setup. Create a new user with sudo privileges immediately:

adduser yourname
usermod -aG sudo yourname

Then in /etc/ssh/sshd_config, set PermitRootLogin no. Reload SSH. Test your sudo user works in a second terminal before closing the root session. This is the step most people skip and then regret.
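The relevant sshd_config line, for reference. Running sshd -t syntax-checks the file before you reload:

```
# /etc/ssh/sshd_config
PermitRootLogin no
```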

02

SSH Key Authentication Only

Disable password authentication for SSH entirely. Generate a key pair on your local machine, copy the public key to the server, then set PasswordAuthentication no in sshd_config.

ssh-keygen -t ed25519 -C "yourname@yourmachine"
ssh-copy-id yourname@your.server.ip

Use ed25519 keys rather than RSA: smaller, faster, and at least as strong in practice. Once key auth is working and tested, password auth goes off permanently.

03

ufw — The Minimal Firewall Setup

Ubuntu's Uncomplicated Firewall gets you a working firewall in a handful of commands. Deny everything by default, then allow only what you actually need:

ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

If you're running a mail server or a database, add those ports explicitly. Don't open anything you're not actively using. Every open port is an attack surface.

🔧 Verify your rules with: ufw status verbose
04

fail2ban — Blocking Brute Force Automatically

fail2ban watches your auth logs and automatically bans IPs that fail too many login attempts. Install it, enable the sshd jail, and it runs quietly in the background:

apt install fail2ban
systemctl enable fail2ban
systemctl start fail2ban

The default configuration bans an IP for 10 minutes after 5 failed attempts. I increase this to 1 hour after 3 attempts on any server that's publicly visible. Check banned IPs with fail2ban-client status sshd.
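Overrides belong in jail.local, not the packaged jail.conf. A minimal version matching the numbers above (recent fail2ban releases accept time suffixes like 1h; older ones want seconds):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
bantime  = 1h
```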

05

unattended-upgrades — Security Patches on Autopilot

Security patches matter. The compromise between "always current" and "stable for production" is unattended-upgrades configured for security-only updates:

apt install unattended-upgrades
dpkg-reconfigure --priority=low unattended-upgrades

This installs security updates automatically without touching feature updates or major version bumps. Check its log at /var/log/unattended-upgrades/ monthly to confirm it's running.

06

Change the Default SSH Port (Optional, But Worth It)

This is security through obscurity — it doesn't stop a determined attacker, but it dramatically reduces noise in your auth logs. Change Port 22 to something non-standard (e.g. 2222, or any port above 1024 that you'll remember). Update your ufw rules to allow the new port, then delete the old rule for 22 (with the default-deny policy from step 03, removing the allow is enough).
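The change is one config line plus two firewall rules; 2222 here is just the example port from above:

```
# /etc/ssh/sshd_config
Port 2222

# update ufw BEFORE reloading SSH, or you lock yourself out:
#   ufw allow 2222/tcp
#   ufw delete allow OpenSSH
```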

The night I changed my SSH port, auth failure attempts in my logs dropped from ~400/day to near zero. Not more secure in theory, much quieter in practice.

This takes about 30 minutes on a fresh server. It's not glamorous work. But it's the kind of maintenance that prevents the 2am panic of finding your server compromised and your client sites down.

Do it every time. In this order. Don't skip steps.