Technology builds the walls, but people turn the key—or forget to do so. Christopher Fielder, Field CTO at Arctic Wolf, calls for a cultural shift: from one-off security training to continuous education, and from blind trust in AI to a new collaborative model between humans and machines.
Reused passwords, ignored security protocols: it happens on the shop floor, but just as often in the boardroom. Fielder observes a concerning trend among C-level management.
“We see an unfortunate trend of ‘do as I say, not as I do’,” Fielder notes. “C-levels are often the primary targets. With spear phishing, you don’t go after everyone—you target that high-value user.”
Leadership as a target
A concrete danger is business email compromise. Fielder outlines the scenario: if an attacker gains access to the CFO’s account, they can email the finance team from that account requesting a transfer to an account the attacker controls. Because the message looks legitimate, it goes unquestioned.
“We need to tell those individuals with privileged accounts: you must lead by example. You must set the standard for how security is implemented,” Fielder emphasizes.
“It’s not just about holding people accountable. It’s about preventing them from taking those bad actions in the first place.”
Christopher Fielder, Field CTO at Arctic Wolf
The figures are sobering. Arctic Wolf conducted research into human error and weaknesses in cybersecurity. “What we discovered was that a large percentage of leadership had clicked on phishing links. I believe it was in the eighty percent range. But of those who clicked, the vast majority said they were very confident that their environment would not be compromised.”
It is a paradox: the people with the most access and the highest risk profile are the least likely to follow the rules, while being the most convinced that everything is under control.
Accountability begins with education
Building a culture of accountability is realistic, Fielder believes, but there are pitfalls. Writing policy isn’t enough; people need to understand what’s in it and why.
A current example: AI policy. Many organizations write acceptable use policies for tools like ChatGPT, but fall short in communicating them.
“Many organizations write those AI policies, but they don’t inform their employees that the policy exists. They don’t explain what the policy entails. They just say: here is the policy, read it, sign that you’ve read it, done,” Fielder describes.

The result? If someone enters confidential company data into an AI model, you might be able to hold that person accountable afterward, but the damage is already done.
“It’s not just about holding people accountable. It’s about preventing them from taking those bad actions in the first place,” says Fielder. “And remember: once you put data into AI, you’ve trained the model on it and you can’t get that data back out. Prevent it from happening.”
AI cannot function without humans
Can’t AI simply compensate for human error? Fielder is clear on this: AI can never function in a vacuum.
“AI is nothing more than a hammer. A hammer can be used to build a building or to smash a window. AI can find weaknesses, discover vulnerabilities, identify open backdoors, and detect reused passwords. But it is then usually up to the human to take the necessary action to correct those errors.”
“Once you put data into AI, you’ve trained the model on it and you can’t get that data back out.”
Christopher Fielder, Field CTO at Arctic Wolf
Fielder also points to the inherent limitations of AI: the model is only as good as the data it was trained on. He illustrates this with a striking example.
“Suppose I show a new AI model a series of photos of cats. All with four legs, all gray or brown or white. And then I show it an orange cat with three legs. The model doesn’t know what it is because it hasn’t seen that in the training data.”
The same limitation applies to cybersecurity: if a new type of attack emerges that isn’t in the training data, AI cannot recognize it. At that point, it needs a human analyst to say: this is unusual, this isn’t right.
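To make that handoff concrete, here is a minimal sketch of confidence-based triage, assuming a hypothetical classifier and threshold; the names and values are illustrative and not Arctic Wolf’s actual pipeline.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human decides (illustrative value)

@dataclass
class Verdict:
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # model's probability for that label

def escalate_to_analyst(event: dict, verdict: Verdict) -> str:
    """Hand the event to a human: the model has not clearly seen
    this before, so a person makes the call."""
    print(f"Analyst review needed: {event} "
          f"(model said {verdict.label} at {verdict.confidence:.0%})")
    return "pending:human_review"

def triage(event: dict, verdict: Verdict) -> str:
    """Auto-handle confident verdicts; escalate the rest."""
    if verdict.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{verdict.label}"
    # The "three-legged orange cat" case: outside the training data.
    return escalate_to_analyst(event, verdict)

# A confident verdict is handled automatically...
print(triage({"src": "10.0.0.5"}, Verdict("malicious", 0.98)))
# ...an unfamiliar pattern goes to a human.
print(triage({"src": "10.0.0.9"}, Verdict("benign", 0.52)))
```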
From human-in-the-loop to human-on-the-loop
Arctic Wolf has now adopted a new concept that goes beyond the well-known “human-in-the-loop” principle. Fielder calls it “human-on-the-loop”: people who are continuously involved in all aspects of AI, not just as a single cog in the machine, but as a partner.
“We are moving beyond the concept of human-in-the-loop to human-on-the-loop. That means humans are continuously involved in all aspects of AI, rather than just being one piece of the puzzle. It’s about collaborating with AI at all levels of the environment.”
This is particularly relevant now that agentic AI is emerging: AI that doesn’t just generate an answer, but also makes a recommendation and then takes action itself.

“With agentic AI, the system can say: I saw something suspicious, the attack was moving fast, I need to react quickly. And then it takes the response action itself. But we need humans, at the very least, to observe and say: was that action okay? Should it have done that?”
Fielder compares the human role to that of a mentor or teacher. The human trains, corrects, and adjusts—just like a manager guiding their team.
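As a rough sketch of what human-on-the-loop could look like in code, the pattern below has the agent act immediately but record every action for a human to approve or reject afterward. The class and method names are hypothetical, not a real product API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ResponseAction:
    description: str
    taken_at: datetime
    reviewed: bool = False
    verdict: str | None = None  # "approved" or "rejected" after review

class AgenticResponder:
    """The agent responds at machine speed, but every action lands in
    a queue a human works through afterward (human-on-the-loop)."""

    def __init__(self) -> None:
        self.review_queue: list[ResponseAction] = []

    def respond(self, threat: str) -> ResponseAction:
        # Act first: the attack moves fast, so the agent does too...
        action = ResponseAction(
            description=f"isolated host involved in: {threat}",
            taken_at=datetime.now(timezone.utc),
        )
        # ...but never silently: a human still answers
        # "was that action okay? should it have done that?"
        self.review_queue.append(action)
        return action

    def human_review(self, action: ResponseAction, approved: bool) -> None:
        action.reviewed = True
        action.verdict = "approved" if approved else "rejected"

responder = AgenticResponder()
act = responder.respond("lateral movement from workstation-42")
responder.human_review(act, approved=True)
```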
Maintaining core skills
A major danger, Fielder warns, is security professionals letting their core skills atrophy because they lean too heavily on AI.
“I always tell analysts: maintain your core skills. You need to be able to detect threats, make the right decisions on how to respond, and know the basics of your environment. Otherwise, you won’t know if AI is giving you the right answers.”
He draws a comparison to a manager who says, “I don’t need to know that, because you’re telling me.” The problem is that you then have no way of knowing whether someone is lying to you.
“Don’t think AI is just going to do your job for you. It is your responsibility to be the manager of AI.”
Christopher Fielder, Field CTO at Arctic Wolf
“Don’t think AI is just going to do your job for you. It can make some aspects easier, but it is your responsibility to be the manager of AI. The one who says: did you do that correctly? Are you off the mark? And how are we going to fix it?”
Humans and AI: inextricably linked
The picture Fielder paints is not one of AI as a replacement, but as an enhancement. Human analysts can spot things AI misses and feed those discoveries back into the model, so the system keeps improving.
“Our hunters go through the environment looking for things that might have gone unnoticed by current detection methods. What they find, they feed back into the model: now you go look for this, and we’ll go look for something new.”
This creates a cycle of continuous improvement where human and machine complement each other. Humans don’t have to do it alone, and AI can’t do it alone. The key is collaboration.
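A minimal sketch of that cycle, with purely illustrative pattern names: what a hunter finds today becomes an automated detection tomorrow.

```python
# Known patterns the automated layer already detects (illustrative only).
detections: set[str] = {"reused_password", "open_backdoor"}

def automated_scan(event: str) -> bool:
    """Automation catches only what it has been taught to see."""
    return event in detections

def hunter_feedback(new_pattern: str) -> None:
    """A hunter finds something detection missed and feeds it back:
    'now you go look for this, and we'll go look for something new.'"""
    detections.add(new_pattern)

assert not automated_scan("suspicious_oauth_consent")  # missed today...
hunter_feedback("suspicious_oauth_consent")
assert automated_scan("suspicious_oauth_consent")      # ...caught from now on
```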
Fielder’s message is clear: don’t just invest in technology, invest in your people. Don’t just write policies, ensure everyone understands them. Don’t trust AI blindly, but make it a partner. In the cybersecurity of tomorrow, humans haven’t become obsolete. On the contrary, humans are more important than ever.
This is an editorial article in collaboration with Arctic Wolf. Want to know more about their solutions? Head over to this page to watch the full video interview.
