Proactivity in AI agents: another step to replacing people?
Agentic AI can now take action instead of just replying, and automate multiple steps of a complex task. Does that put another nail in the coffin for human jobs?
Gen AI’s first chapter looked a lot like search: ask your question, and the computer answers it. With the latest slew of updates (models like OpenAI’s o3 and the like), we’re stepping into a slightly new act - proactive AI tools - where models schedule meetings, watch dashboards, nudge you when a metric drifts, and even draft the follow-up email before you remember the call ended. It feels like the software is “thinking”, but it’s not. It feels like agents create themselves, but they don’t. Here’s a very quick, high-level run-through of the need-to-knows.
It’s still rule-based automation.
Large language models still don’t have objectives of their own. They look at what they know of you, consider what you asked, look for patterns, and respond to triggers or data inputs. We’ve been doing this for years - it’s basic “if this, then do that” rule-based automation. It’s not consciousness; it’s the same concept that was previously only available to coders and techies, which anyone can now access by “talking” to the AI agent builder. Get used to seeing and optimizing things like the below (Lindy.ai), and talking in natural language to define what each step actually means:

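To make the “if this, then do that” point concrete, here’s a minimal sketch in Python. None of it is Lindy’s or ChatGPT’s actual API - the threshold, fetch_signups and send_alert are all placeholder names - but it shows that a proactive agent is really just a trigger, a condition and an action wired together:

```python
# A proactive "agent" reduced to its skeleton: trigger -> condition -> action.
# fetch_signups() and send_alert() are hypothetical stand-ins for whatever
# data source and notification channel you would actually wire up.

THRESHOLD = 0.15  # alert if signups drop more than 15% week on week

def fetch_signups(week_offset: int) -> int:
    # Stand-in for a real query against your analytics tool.
    return 180 if week_offset == 0 else 240   # dummy numbers for illustration

def send_alert(message: str) -> None:
    # Stand-in for Slack, email, or whatever channel the agent may use.
    print(message)

def monday_check() -> None:
    this_week, last_week = fetch_signups(0), fetch_signups(-1)
    drop = (last_week - this_week) / max(last_week, 1)
    if drop > THRESHOLD:                                      # "if this..."
        send_alert(f"Signups down {drop:.0%} vs last week")   # "...then do that"

monday_check()  # a scheduler (cron, Lindy, etc.) would call this every Monday
```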
Prompting needs to be combined with workflows - and don’t rely on someone else in your team to learn how to do it.
Building on the first point, successful users aren’t only the ones who write the cleverest prompt; they’re the ones who are excellent at prompting and can also structure basic rule-based automations and audit the steps an agent takes - see my previous post on this. Acting as a permanent human in the loop, a bit like an air-traffic controller, is a key emerging human skill.
This means you should get good at structuring basic automation workflows in tools like Lindy or n8n (or now ChatGPT), as well as getting great at prompting in a way that helps AI models learn. That’s the way forward to choreographing agents and understanding where they get stuck: specify the hand-offs, and make sure a human takes control when things go wrong, look like they’re going wrong, or your customer needs reassuring that your output is verifiable.
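As a rough sketch of what a hand-off can look like (the confidence threshold, run_agent_step and ask_human below are hypothetical placeholders, not any particular tool’s API), the logic is simply: let the agent act when it’s confident, and route everything else to a person:

```python
# Hypothetical hand-off logic: the agent acts autonomously only when it is
# confident; everything else is escalated to a named human reviewer.

CONFIDENCE_FLOOR = 0.8  # below this, a person takes over

def run_agent_step(task: str) -> tuple[str, float]:
    # Stand-in for whatever model/tool call does the work; returns a draft
    # output plus a self-reported confidence score.
    return f"Draft reply for: {task}", 0.65

def ask_human(task: str, draft: str) -> str:
    # Stand-in for the escalation channel (ticket, Slack message, inbox).
    print(f"Escalating '{task}' for review: {draft}")
    return draft  # in reality, a person edits, approves or rejects this

def handle(task: str) -> str:
    draft, confidence = run_agent_step(task)
    if confidence < CONFIDENCE_FLOOR:
        return ask_human(task, draft)   # explicit hand-off to the human in the loop
    return draft                        # confident enough to ship automatically

print(handle("Follow up on yesterday's customer call"))
```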
Garbage in = garbage out.
We’ve seen more than enough evidence that AI trained on nothing defensible will create outputs that aren’t trusted, even if they look alright on the surface. Working with customers and businesses and speaking with investors, I can already see both groups getting pretty savvy about what a decent AI product looks like, as opposed to a buzzword on a deck. That’s generally good for weeding out businesses that don’t walk the talk. So a huge factor in orchestrating an agent is what data you feed it - it’s also why AI still isn’t good with messy CRMs or Excel spreadsheets. The issue with the output is, a lot of the time, a problem with the input.
This makes your job (on top of prompting and automation logic) more about wiring pipes securely and deciding what the agent can see and change. And as mentioned above, don’t be the person who lets someone else build or interrogate the automations - one day you’ll find yourself out of the loop and unemployed.
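Deciding what the agent can see and change often boils down to something as unglamorous as an explicit allow-list. Here’s a minimal sketch - the scopes and tool names are invented for illustration:

```python
# Illustrative permission scoping: an explicit allow-list of what the agent
# can read and what it can write. Scope names are made up for this example.

AGENT_PERMISSIONS = {
    "read":  {"crm.contacts", "analytics.weekly_report"},
    "write": {"email.draft"},            # drafts only; nothing is sent unreviewed
}

def authorise(action: str, resource: str) -> bool:
    # Return True only if the agent has been explicitly granted this scope.
    return resource in AGENT_PERMISSIONS.get(action, set())

# The agent can draft an email...
assert authorise("write", "email.draft")
# ...but it cannot edit CRM records, because nobody granted that scope.
assert not authorise("write", "crm.contacts")
```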
Trusting an AI output is increasingly about being able to step back through its logic.
Every proactive agentic output should ideally come with an audit trail: what rule was triggered, the data that was used, and the confidence score. Ultimately, in a world where agents can hallucinate and the internet is becoming a more user-generated place (as opposed to content coming from a big ‘trusted’ publisher), it’s likely that the data feeding AI from the open internet contains weird things. If you can’t see any semblance of an audit trail, you’re subscribing to black-box algorithms, and the content summaries you ask for might start to make decisions for you - which isn’t a good place to end up.
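What would such an audit trail actually contain? Here’s a minimal sketch of a single record - the field names are mine, not any specific tool’s, but they map onto the rule, the data and the confidence score mentioned above:

```python
# One illustrative audit record per agent action: enough to step back through
# what fired, what data it saw, and how sure it was. Field names are invented.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    rule_triggered: str          # which "if this" condition fired
    inputs_used: list[str]       # sources/records the agent actually read
    action_taken: str            # what it did or drafted
    confidence: float            # the agent's own confidence score, 0-1
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AuditRecord(
    rule_triggered="signups_drop_over_15pct",
    inputs_used=["analytics.weekly_report", "crm.new_leads_last_7d"],
    action_taken="Drafted alert email to growth channel",
    confidence=0.82,
)
print(record)
```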
Start small.
Here’s a super quick one you can do: automate a repetitive Monday-morning report with a prompt like the below, and start to observe what you get. Then go back in the next day, tweak your prompt, tighten it, and check the sources of the content being fed to you. It’s a good way of getting accustomed to aggregating information proactively, and a good example of a quick proactive AI output that just might have you making better decisions faster.
The input:

The output:

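If you want a starting point for the prompt itself, here’s a rough illustration of the shape it can take - the bracketed items are placeholders to swap for your own topics and sources, not a recommendation:

```python
# An illustrative Monday-morning briefing prompt, kept as a plain string so it
# can be pasted into whichever scheduling tool you use (ChatGPT scheduled
# tasks, Lindy, etc.). The bracketed items are placeholders.

MONDAY_REPORT_PROMPT = """
Every Monday at 8am, compile a short briefing for me:
1. The three most significant developments in [your industry] from the past week.
2. For each, a two-sentence summary and why it matters to [your role or company].
3. A link to the original source for every item, flagging anything that comes
   from an unverified or user-generated source.
Keep it under 300 words and end with one suggested action for the week.
"""

print(MONDAY_REPORT_PROMPT)
```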
The above example is not necessarily as good as your favourite email newsletter. Yet. Trusted writers, deep dives and genuine expertise still come best from the experts themselves. But I’d suggest that anyone creating email newsletters that just summarize existing information will need to add more actionable value to stay ahead, and that anyone can use the above as a starting point for filling in the gaps in their knowledge.
To summarize
These updates are not about AI taking a step towards replacing us; they simply piece together concepts that have existed for years, and the AI is still waiting for marching orders and orchestration. But you can see some clear clues about how knowledge work will evolve: the future looks like a world where knowledge is free and the playing field is level, and you decide how you receive information, and in what format, so you can take the next best action.
Treat these agents like talented but literal interns: give them triggers, context, and clear exit criteria.
Master orchestrating basic prompts and automations now and you’ll set yourself up pretty well, instead of cleaning up after rogue LLMs. Do it before the big LLMs steamroller the next round of human capabilities.
I am creating a forum to share build journeys in public; subscribe above if you want to be alerted when I make that available!