A Big Jump in Automating Human Thinking

ChatGPT and other large language models (LLMs) are about to completely upend tech. The initial questions about integrating an LLM can quickly turn into the realization that it can help with basically any business process.

“How can we improve our documentation with AI?”

“We can ask the AI to improve the wording, since it knows how to write coherently and all.”

“Wait, why not have the AI write the docs from the get-go? We can just feed our internal content and source code to the AI, and with some finagling, it could describe what the product does.”

“Heck, since the AI can read and write code, we could just have the AI read our tickets and suggest code changes to fix our bugs in the first place.”

“Why not have it read the source code, find the bugs, file the tickets, and then suggest the actual code that fixes them? We could set up an environment where its code suggestions are automatically tested.”

“Make the pipeline complete and have it document what it changed too.”

“Hold on… do we even need to provide customer-facing docs at all? We could just keep a secured data dump of all our internal content and code, and provide an AI interface for the customer to ask questions. That way, the customer could just ask for a guide to do X, and the AI would provide it based on the latest raw data about our product(s). It could tailor it to any customer’s preferences, level of tech expertise, language, etc.”

“Why would other companies need integration guides either? They could just input some of their source code and what they’re trying to do into our AI interface, and our AI could write the integration code for them. Again, have it tested in an isolated environment. Or the customer could even have their own AI that talks to our AI about the best way to integrate.”

“The AI could report back on how to improve our business based on customers’ experiences and questions, too. We could automate user-story analysis and have it feed the insights back to us.”

It’s madness.

In just a short batch of ideas, we go from tasking the AI with polishing the wording of our docs to having it run the whole loop: finding and fixing bugs, documenting the changes, answering customers directly, and feeding insights back into the business.

All of this and more is possible, however rudimentarily, with the latest LLMs. So many job functions will be augmented by, or replaced with, the role of a “prompt engineer”: someone with the specialized knowledge to control the inputs to the AI, which then does the real work of thinking and analysis. What used to take a team a week of working an issue together may take an AI only a few minutes.
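To make the “prompt engineer” framing concrete, here is a minimal, hypothetical sketch of one link in that chain: feeding a bug ticket and the relevant source file to an LLM and asking for a suggested fix. It assumes an OpenAI-style chat API; the model name, ticket text, and file path are made-up placeholders, not anything from a real pipeline.

```python
# Hypothetical sketch of the "read a ticket, suggest a fix" idea,
# assuming an OpenAI-style chat API. The model name, ticket text, and
# file path are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ticket = "Bug: exporting a report with no rows crashes with IndexError."
source = open("report_export.py").read()  # hypothetical module tied to the ticket

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a senior engineer. Given a bug ticket and the "
                       "relevant source file, propose a minimal patch and "
                       "explain the root cause in two sentences.",
        },
        {"role": "user", "content": f"Ticket:\n{ticket}\n\nSource:\n{source}"},
    ],
)

# A human still reviews the suggestion, and in the fuller pipeline it would
# first run against an isolated test environment before anything lands.
print(response.choices[0].message.content)
```

In practice, the interesting work is less this one API call and more the plumbing around it: choosing which source files to include, testing the suggestion, and deciding who signs off.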

Don’t get me wrong: this all still requires humans in the loop to steer the AI’s power, but it will drastically cut down the effort required to get things done. That isn’t necessarily a bad thing; it’s just that the changes that seem to be right here on the horizon already look massive in scope, and we’re hardly out of the starting gate.

We’re about to take a big leap in automating human thinking. Not just automating calculations, the way computers always have, but actual analysis and thinking. I see a future where we do more second-order analysis instead of poring over raw code and drafts: the AI does the first sweep and handles the low-level work. As the tech improves, the level at which humans are needed will probably keep creeping higher and higher.

Crazy times!

Daniel