1. Standardize URLs so you never think about routing

Pick one convention and bake it in:

• Option A: the filename becomes the URL slug automatically (my-new-post.md → /posts/my-new-post/)
• Option B: an explicit slug: field in front matter when you want a custom title

Once set, you never touch it again.

2. Add a one-command script

In package.json, add something like:

• npm run new "my-title" creates the post file with today’s date + front matter (optional, but a huge time saver)
• or a tiny bash script that does the same

(If you want, paste your repo structure and I’ll give you the exact script and where to place it.)

The answer is actually quite obvious: it’s not in the training data.

I’m a big believer that AI can perform at least 50% of the jobs in any enterprise right now; however, this requires that a large amount of software scaffolding be built first.

The question is not whether AI can do a given job. The point of this post is to explain why LLMs aren’t able to perform enterprise tasks out of the box (without additional software scaffolding).

Again, the reason is that the enterprise task you’re trying to perform is not in the training data.

Let’s look at the main sources of data for AI models:

• The internet
• Question/answer forums
• PDFs
• Blog posts
• Research papers
• News articles
• All books and textbooks ever written

If you look at these data sources, none of them cover how to perform enterprise-specific tasks. Enterprise data is, by its nature, confidential, meaning it is not posted or shared on the internet or documented in publicly available books.

You might ask: well… aren’t there textbooks written on how to perform certain tasks? I find this quite unlikely. Find me a book on how to trade stocks and make money. What “works” in that book likely doesn’t work in “real life” at “real companies.” And even if a small number of such textbooks did exist, every enterprise has its own way of doing things. A textbook that teaches you how to do a task in general would not align with a specific enterprise’s practices. And therefore, the AI still would not work “out of the box” without extra scaffolding.

Low probability

LLMs are probabilistic: they pick the next token that is most likely to occur. Even if a very small amount of data on how to perform a specific task exists in the training set, it would likely get blotted out by the far more common data from Reddit and other question/answer forums, meaning that this knowledge in the LLM is never surfaced to the user.
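As a toy illustration of that blotting-out effect (the tokens, probabilities, and the tool name "AcmeExpenseTool" are all made up), greedy decoding simply takes the highest-probability continuation, so the rare enterprise-specific one never wins:

```javascript
// Greedy next-token selection: return the token with the highest probability.
function mostLikelyToken(distribution) {
  return Object.entries(distribution).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}

// Hypothetical P(next token | "To file an expense report, open ..."):
const next = mostLikelyToken({
  "your browser": 0.62,    // generic advice, abundant on the public web
  "the app": 0.31,         // generic advice
  "AcmeExpenseTool": 0.02, // the company's actual internal system (rare in training data)
});

console.log(next); // → "your browser"
```

Even with sampling instead of greedy decoding, a 2% continuation surfaces only occasionally, which is far too unreliable for an enterprise workflow.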

Because these documents are confidential, the AI has never seen them before.