AI literacy for project practitioners (Part 2): the reading list I used


In the first part of this series, I shared the skill matrix I’m using to prepare for delivering AI-powered products. In this second post, I share the sources I used to work through that matrix.

Before diving in, a few important caveats.

This is not the definitive reading list. It is highly subjective and reflects what worked for me. Some articles are from 2023 or 2024. I still included them if I felt the core ideas remain relevant. The explicit goal was to get a broad and practical overview within one or two weeks, without buying a book or going deep into academic papers.

Some of the content comes from commercial vendors. While these companies naturally promote their tools or services, I found the educational value of the material high enough to recommend it anyway. I have no affiliation with any of them.

Think of this list as a guided path, not a curriculum.


Fundamentals: understanding what you are dealing with

How do large language models work?

If you read only one piece on LLM fundamentals, make it this one:

This was by far the most helpful article for me as a non-technical practitioner. It explains what LLMs do, why they work and where the limits are, without drowning you in math or jargon.

Once that mental model clicked, I went deeper:

I originally tried to start with this article, but it was too technical for me at that point. After reading the first article, however, this one added a lot of useful detail and nuance.

If you prefer video over reading:

I haven’t watched the full three-plus hours yet, but even the first hour gives a very solid understanding. If the rest is of similar quality, this is an excellent alternative to reading.

What are agents?

Once you understand basic LLM behaviour, agents naturally come up.

This is a good, high-level explanation of agent concepts without jumping straight into implementation details.

What is RAG?

Retrieval Augmented Generation (RAG) turned out to be one of the most important concepts for me. I only read this later in my journey and wish I had done so earlier.

Despite being published by a vector database vendor, this is an excellent explanation and a strong entry point into RAG and related concepts.
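To make the core idea concrete: RAG retrieves relevant documents first, then feeds them into the prompt so the model answers from that context rather than from memory alone. Here is a minimal sketch in Python; real systems use embeddings and a vector database, while the keyword overlap and the example documents below are simplifications I made up for illustration.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question.
    Stands in for semantic search over a vector store."""
    q = tokens(question)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(question: str, context: str) -> str:
    """Ground the model's answer in the retrieved context."""
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday to Friday, 9am to 5pm.",
]
question = "What is the refund policy?"
prompt = build_prompt(question, retrieve(question, docs))
```

The two-step shape (retrieve, then generate) is the whole trick: the generation step stays a plain LLM call, only the prompt changes.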


Bonus material: Software 3.0

While reading through various papers and articles, I stumbled upon a presentation explaining the idea of Software 3.0.

Highly recommended if you want a high-level, conceptual view of how software development is changing with AI.


Challenges of bringing LLM-based applications to production

Understanding fundamentals is one thing. Bringing AI into production is another.

Two articles helped me understand why LLM-based projects behave differently from traditional software projects:

This article gives a great overview of architectural and operational patterns, especially around evaluation. It helped me connect theory with delivery reality.

This nicely complements the article above. As a side note: the author also wrote a book called AI Engineering. I’m currently reading it and, based on the first chapters, would recommend it if you want to go deeper.


Testing and evaluation: where classic approaches break

Testing was one of the biggest conceptual shifts for me.

This article explains why LLMs can be used to evaluate other LLM outputs and why this is often the only scalable approach.

You don’t need to read this end to end. Skimming it gives a good feel for how evaluation frameworks think about quality in AI systems.
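The LLM-as-judge idea can be sketched in a few lines: a judge model receives the question plus a candidate answer and returns a score, which makes evaluation scriptable at scale. In this sketch, `call_llm` is a placeholder I invented for whatever model API you actually use, and the 1-to-5 scale is just one common convention.

```python
# Prompt template asking the judge model for a single-digit score.
JUDGE_TEMPLATE = (
    "Rate the answer to the question on a scale of 1 (wrong) to 5 (excellent).\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Reply with a single digit."
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a fixed score here."""
    return "4"

def judge(question: str, answer: str) -> int:
    """Score an answer by asking another LLM to rate it."""
    prompt = JUDGE_TEMPLATE.format(question=question, answer=answer)
    return int(call_llm(prompt).strip())

score = judge(
    "What is RAG?",
    "Retrieval Augmented Generation grounds answers in retrieved documents.",
)
```

In practice you would run this over a whole test set and track the score distribution over time, which is exactly where classic pass/fail testing breaks down.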


Bonus material: how investors think about AI

Halfway through my research, I noticed that major VC firms publish surprisingly good AI material.

There’s no 2025 piece yet, but together these articles provide a useful historical perspective on how the AI market evolved over the last few years. Sequoia in general has quite a few good pieces on their website: https://sequoiacap.com/stories/

This is a much more structured and educational collection than my list. If you prefer a more systematic approach, this is worth exploring.


Monitoring and tracing: making failures visible

While reading about LLMOps, I struggled to imagine how monitoring and tracing actually look in practice.

Two short product demos helped make this concrete:

You don’t need to care about the tools themselves. The value is in understanding what is traced and why.
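What these tools capture can be sketched without any tooling at all: each model call becomes a span recording inputs, outputs, and latency. The sketch below is my own toy version, with `fake_model` standing in for a real API call; real tracing tools add nesting, token counts, and cost on top of the same basic record.

```python
import time

# In-memory trace log; real tools ship spans to a backend instead.
TRACE: list[dict] = []

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return f"Echo: {prompt}"

def traced_call(name: str, prompt: str) -> str:
    """Run a model call and record it as a span."""
    start = time.perf_counter()
    response = fake_model(prompt)
    TRACE.append({
        "span": name,
        "prompt": prompt,
        "response": response,
        "latency_s": time.perf_counter() - start,
    })
    return response

traced_call("summarise", "Summarise the meeting notes.")
```

Once every call leaves a record like this, a failure stops being "the app gave a bad answer" and becomes "this span, with this prompt, produced this output", which is the whole point of tracing.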


From demo to production

One theme kept repeating: demos are easy, production is hard.

A very clear explanation of why many AI projects stall after the demo phase.

A short introduction to EDD as one possible way out of “AI demo hell”.


The economic potential of AI

To understand the “why”, I wanted a high-level view of AI’s economic impact.

The report is from 2023 and the quantitative numbers will age. The use cases and patterns, however, still feel very current.


Bonus material: the productivity J-curve

I noticed that AI is not transforming corporate life as fast as some headlines suggest.

This paper offers a solid explanation: productivity often dips before it accelerates when new technology is introduced.


Outcome-based pricing

AI doesn’t just change products. It changes pricing models.

A good, short overview of the idea.

Especially useful to understand the operational and accounting challenges behind the concept.


Regulation: the unavoidable reality

Finally, regulation.

Wikipedia provides a surprisingly good first overview.

A clear explanation of why the US situation is more fragmented.

This article helped me connect the dots across regions.


Closing thoughts

This list is not meant to be exhaustive. It reflects one learning path that worked for me while building the skill matrix from the first post.

If you follow a similar approach, my advice would be simple: define your target state first, then select material deliberately. Otherwise, it’s very easy to read a lot and still feel unprepared.

If you have suggestions for sources that fit this mindset, I’d be happy to compare notes.