Recently, I read a piece about AI and writing, and it made me curious to apply similar questions to the work of tech professionals. What happens to critical thinking in design, research, product, or engineering when AI dictates how we communicate?
How can we protect critical thinking and not be overly dependent on AI?
How can we leverage AI tools to think outside of the box?
My take is that it’s OK to experiment with AI tools once we have a piece of writing that we want to improve.
This implies:
Already knowing what key message we want to communicate and why
Already having examples and key pieces of information that we value sharing
AI tools can then quickly generate different ways to present your argument and highlight the information you want to share.
The tricky part is when you haven’t figured out what matters and the topic is wide open. Then you’re outsourcing the thinking that YOU should be doing.
The writing outputs that AI tools present seem so polished, so final, that it’s not easy to deconstruct them and consider the biases involved.
Are the writing outputs generated by AI accurate?
What assumptions do they make?
Do we have the mental space to create counterarguments against polished output, especially if we are too early in our thinking?
Many of you would agree that tech professionals tend to be pressed with time. We rush, rush, and rush some more.
A piece of writing – a research plan, a design rationale, a product information document, or release notes – that seems polished, complete, and correct is hard to resist. Can we still be willing and able to critically engage with it?
What’s your take? DM me, I’m curious to know.
👋🏽 I’m Mel…
I’m here to empower tech professionals by:
🙇🏻‍♀️ Doing career coaching (DM me to book your free discovery call)
✍🏽 Writing on Medium and Substack
🩷 Hosting webinars, making warm intros, and building community