Ethics: The study of what is right and wrong and how we should treat other people.
Ethical: Doing what is right and good according to rules about how people should behave.
Ethics and AI should be a constant topic right now. AI can now write your LinkedIn posts, draft your emails, even write your heartfelt apology to your spouse. For overloaded people, the temptation to hit "generate," take the result, and spread it around is nearly unavoidable. I'm watching people let AI do their thinking for them, seemingly relishing the loss of both their individual voices and their humanity.
I've been watching this unfold across every platform, every industry. Less so on Substack, but it is creeping in. People cutting and pasting the wisdom of the AI.
When I look at these posts, they aren't "plagiarism"; they are something even lazier. I'm calling this "AIgarism" (that's AI-garism, with an I, not an L): not outright copying someone else's work, but passing off AI-generated content as their own original thinking. These content-pollution generators write a quick two-minute prompt, get a confirmation-bias-laden result, and then bask in the glory.
AI is an Idiot, but it isn’t the Problem
The real problem isn't that AI exists. It's that we're predisposed to abuse it. We want so badly not to be overloaded that we look at AI as the savior that will deliver us from overwork. It's a fairly nice word calculator, but it is by no means an employee or a replacement for actual thinking.
When I'm writing or creating with others, I use AI tools as part of the process, just like I use a word processor or a spreadsheet. But just as I'd always check a spreadsheet's math myself (the human), I stay deeply involved in everything I produce. This paragraph was originally organized by my AI agents, but I've rewritten nearly every word.
To do my work, I follow what we're calling a "Balanced Work" approach to AI collaboration. It's built on a simple premise: humans stay in the driver's seat, AI handles the minutiae, and we are always second-guessing the results we are given.
In every post, I am trying to provide a mini-system. A way to look at work. This is an idea, a model…
Jim Benson's Ethical Balanced AI Workflow
Step 1: Ideate as and with People (Creative)
I muse, I mull. I converse. Real ideas come from human conversation, from wrestling with problems alongside other people who see the world differently than you do. From taking time to combine what you are thinking with what’s happened over your life. So I start here. Individual and collaborative. Come up with ideas.
Step 2: AI Organizes and Compares Information
Once the messy, beautiful human ideas are ready, I expose them to AI. The AI sorts, categorizes, and structures the information. Then I subject it to a series of questions like, "What else has been written about this?" and "What criticisms are there of ideas like this?" My AI is my mediocre research assistant, giving me a mediocre product that I then need to interpret and edit. The AI never gives me the "right" answers. It gives me acceptable organization.
Step 3: Human Selection and Filtering
I am very interested in making my points clear, fully understanding my own words, and owning the product. Experience, expertise, and worldview are the inputs for the actual product. We review the AI's recommendations and choose what fits our goals, our values, our vision. There is serious second-guessing of the AI's recommendations here. (You wouldn't believe how many times AI agents will quote me back to me with things I never said.)
Step 4: Human Refinement and Direction
I then take the selected ideas and shape them with human expertise. I look at the original thesis, I see how working with the AI has provided me with additional detail, and I write an article based on this new information.
Step 5: AI Processing
The AI processes the refined direction, with prompts specifically designed to critique the product, deepen the ideas, and point me to other helpful sources.
Recursive: Steps 2 through 5 repeat as the product takes shape.
Step 6: Human Voice and Authenticity
In this last step I rewrite the piece entirely in my own voice. I am writing a Jim Benson / Modus article or course or whatever it may be. The final words need to be human to have a human impact that lasts longer than clicking "like."
Don't Cut and Paste, People
This process has allowed me to question my assumptions and use AI in an ethical way. When used with a group, I find it does the following:
Eliminates ego-driven perfectionism. Writing as a group often takes forever. With the AI, there's a "processing" element that I find keeps any single person from getting attached to "their" idea. The work becomes about the best outcome. Weirdly, the impersonal nature of AI can help depersonalize the work in a healthy way.
Maintains human ownership. Even though we are using AI in the mix, humans are making every significant decision. People are simply using a tool to amplify their capabilities and surface additional information that may be unknown to them.
Preserves authenticity. It's still your hypothesis, your learning, your voice. Step 6, where we rewrite one last time, creates (because you are actually creating it) real information from your real head and heart, typed with your real hands.
Come join the conversation
If you are a leader or a team member who wants to confront the toxicity in the room, please check out these courses and join the conversation.
Cleaning Toxic Waste is a toolkit and a community for restoring health at work.
WIP Whisperer helps you visualize your overload, manage it more humanely, and rediscover pride in what you do.