Are we atrophying our ability to communicate?

By Achim Harzheim   |   Last updated: 2026-05-10

Like most people reading this, I regularly use AI tools to help me write applications and formal texts. That in itself is not an issue; automating the annoying parts is what the technology is all about. But do we know where to draw the line? As I read through research proposals and cover letters, and especially as I scroll through LinkedIn these days, I often zone out and realise: this all sounds the same. The writing is polished, yes, but there seems to be no depth to it, and recognizing classic AI writing patterns actively annoys me. For me, at least, that makes it harder to process the ideas and information the writing contains. What I'm observing is something we already know from research: AI pushes users towards more standardized writing [1]. While I sometimes wonder what that will do to literature, the increasing conformity of formal writing is not my main concern.

What I am worried about is the slippery slope from using AI to draft formal texts to using it to draft and polish our everyday written communication. Currently most users probably use AI to help draft emails rather than text messages, but as LLMs get more integrated into our daily communication tools, more people will use them for that purpose too. WhatsApp has recently rolled out its Writing Assistant, which can draft messages based on a conversation directly in the app, and other messaging tools have either integrated or are planning to integrate similar capabilities. What will this new normal of consulting an AI before reaching out or responding do to our ability to communicate, relate and empathize with each other?

The first question is: will we even use it? Human beings tend to be inherently lazy and, especially with the increasing demands on our time in modern society, usually take the easier and faster route to a goal. We won't do this for every conversation, and certainly not for every one of our contacts, but the temptation will be there. Instead of refamiliarizing ourselves with a chat conversation, potentially listening to voice notes verging on podcasts, and going through the labour of thinking about how to adequately address everything said and express our own feelings, we can take the quick and easy way out and ask an AI how to respond or phrase something. Especially as AI models get better at understanding context and drafting responses, the calculation will also shift from "Can I do this quicker using AI?" to "Can I actually produce a better response using AI?". Most of us are already often uncertain about communicating, and practicing less will not help. Over time a vicious cycle can emerge: the more we use these tools, the worse we get at communication and the less patience we have for it, which in turn makes us more likely to use the tools. And for conversations where we have previously used an AI tool, the threshold for going back will be higher, as we would have to not only catch up on what the other person has written but also make sure our responses are consistent with our preceding AI-assisted communication. As we get out of practice, the actual and perceived workload of actively engaging in communication will increase, making us even more likely to resort to AI. And as the use of AI models becomes more normalised, it is not difficult to see its use leaking into relationships where we would currently rule it out categorically.

Now, one might argue that if a friendship is not important enough for us to make the effort, and we would be willing to relegate its communication to AI, then it is okay to outsource it. First, I don't believe such a clear line can be drawn; as argued above, it will always be blurry and a moving target. But beyond that, this is exactly the sort of emotional work that we should still be doing ourselves. A friendship can go through enduring spells that feel one-sided, or we may realise that the relationship has run its course and that both participants are now different people from who they were when it began. Should we outsource the work of keeping or ending a friendship? And is a situation where one person is putting in the work but unwittingly talking to an AI, or even one where two AIs are talking to each other, really a desirable outcome?

To me there is a cautionary tale for how our ability to communicate and the use of AI can play out: mental arithmetic and the calculator. Nowadays, most of us won't (and often probably can't) readily perform multiplications or divisions in our heads or, god forbid, written long division. That hasn't really been an issue; we rarely need it in a situation where we don't have a smartphone to do it for us. But it shows how an ability that we all had, or can learn easily, can get lost. A skill such as communication is much harder to dismiss as anachronistic and no longer relevant, and at least for now we don't have a technological crutch for communication everywhere. Whether that will change with the advent of smart glasses, and what that will do, is a whole different story.

One solution is to move more of our communication to the real world where, at least for now, we can't easily use AI to help us, and I do believe that is one way to hone our communication skills. But there are some troubling trends. There is evidence that LLMs influence the way we communicate even in the real world: words preferred by LLMs have increased in use in podcasts and YouTube videos since ChatGPT's release [2], though that might be due to scripts being written with LLMs. More importantly, we are increasingly moving from spoken to written communication; it is estimated that we speak almost 30% less now than we did two decades ago [3]. We can see this pattern in daily life even without looking at the research: people don't want to be called, and we often text even when we are in the same vicinity, just because it is easier.

But okay, let's take an optimistic view: most of us will probably get worse at communicating, yes, but we won't fully unlearn what we have spent our whole lives training, and we still spend time outside and in person. Now imagine instead that you are only just learning how to communicate with your peers and finding your voice, and that you spend a much larger share of your time online. This is exactly the situation many adolescents find themselves in.

When a lot of our communication happens online, we have far fewer cues about what our counterpart is thinking: we don't see facial expressions or body language, and we don't hear tone of voice. While the asynchronous nature of online communication gives you more time to think through your answers, you don't know the effect of your message until you get a reply. We all know the anxious waiting game of expecting an answer to something we sent, having no idea how it was perceived, or even imagining worst-case interpretations. Talking to people can be rather nerve-wracking: we don't know what they are thinking, and of course we want to be liked; it is human nature. It takes time, growing up, to understand who we are and how we communicate, to find the people we can vibe with, and to be confident enough to accept that some people will disagree with us and not like us.

When you are at the beginning of this journey and want to be liked, it is incredibly tempting to double-check and safeguard your writing with AI models that can polish what you wanted to say and make sure there is no faux pas. If I imagine myself as a teenager who was anxious to fit in and wanted to be liked by peers, I probably would have checked my messages before sending them. One of the criticisms of AI writing, that it all sounds the same and is pretty uniform, is actually the main draw here. Most teenagers don't want to stick out, so sanding down their communication seems like a good idea. We don't know exactly what the result will be, these are very early days, but I imagine that relying on these crutches too much, especially in our formative years, will make it more difficult to relate to others and learn how to communicate. And you can easily imagine the vicious cycle discussed above repeating here: as teenagers get more used to communicating with AI, they won't want to be without those tools, will communicate more online with AI, will get worse at communicating without it, and so on.

But we can't just pretend AI doesn't exist, and I don't want to argue against using AI in general, or against it being an amazing technology. Most likely, abstaining won't even be an option if you want to succeed. But when it feels like we are on the verge of another large-scale experiment about the effect of technology on society, and especially on the next generation, similar to what we have just done with social media, it gives me pause. Do we need AI support for informal written communication, or are we automating away what makes us human and helps us live in a society?

Let's hope that either consumers choose not to use these tools, regulators intervene, or even the companies themselves realise that this might not be a good idea. Though if we consider how social media evolved, and how many of the current dynamics seem to be playing out similarly, all of these interventions seem somewhat unlikely. So what should we do to prevent the worst effects on our ability to communicate? This might be a good time to mention that I'm a trained physicist and not a psychologist, so I'm not really qualified, some might even say actively disqualified, to give you advice. But you've read this far, so I guess you are either bored enough or trust me enough to hear my take.

Personally, I like to believe it helps to engage with the world in environments and ways where the use of AI is looked down upon or actively discouraged. For one, that means having more spoken conversations. Opt for regular phone calls with your friends and family rather than just texting, or, even better, talk to your friends and especially to strangers in person, even or particularly when it is initially panic-inducing and uncomfortable. Read books and texts not written by AI, and importantly, keep trying to write texts and messages yourself when time and circumstances allow. Engage with the written world in ways you ordinarily wouldn't: join a book club or go on a writing retreat even if you are not a typical writer. If you need feedback on your writing, ask someone you know for help with editing and polishing rather than an AI model, and if you have time, just write a few drafts; you might be surprised how iterating even with yourself can improve your writing. I don't want to advocate against AI use everywhere, and there are many situations where not using it will put you at a disadvantage. For example, at this point it might not be a good idea to write your CV and cover letter entirely yourself, because job applications tend to be pre-sorted by AI, and it has been shown that AI prefers AI [4]. But as a good rule of thumb, when you do use these tools, do it in a mindful fashion, aware of what you are outsourcing to a machine and what might be lost in the process.

One last obvious question I'm sure you have by now: after talking about how AI makes everything sound the same, did I use AI to help me write this piece? That would have been incredibly hypocritical of me, and while I used Claude and Gemini to help research the topic, this text and all parts of it are untouched by AI. That is despite feeling quite starkly the pain of trying to go from a first draft of discombobulated thoughts to a cohesive piece, and, after often using AI tools for that purpose, my diminished ability to do it. I'll let you decide whether abstaining from AI tools was good or bad for the quality of the piece, but at least the writing is uniquely mine, and I believe writing it myself has helped strengthen my ability to empathize and communicate with my readers.


  1. Sourati, Z., et al., "The Shrinking Landscape of Linguistic Diversity in the Age of Large Language Models," arXiv:2502.11266, 2025.

  2. Yakura, H., et al., "Empirical Evidence of Large Language Models' Influence on Human Spoken Communication," arXiv:2409.01754, 2024.

  3. Pfeifer, V. A., & Mehl, M. R., "Sliding Into Silence? We Are Speaking 300 Daily Words Fewer Every Year," Perspectives on Psychological Science, 2026.

  4. Xu, J., et al., "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights," arXiv:2509.00462, 2025.

