When we use AI tools, they do not ask us to be honest. They operate on accuracy: the more accurate and clear our prompts are, the better they can return the information we actually need.
So where does honesty come into the picture when we use AI?
For example, if we lie to set up a context, hoping the AI might give us a better response, does it actually help? Since we know that most of these tools do not store our conversations (unless we save them for our own future reference), it is easy to assume that closing the tab means the conversation is deleted.
The stories of misinformation, disinformation, and fake news are well documented in the industry, and many of them have been inspired by the feeling that "it's ok to be dishonest with AI because AI does not care." And they are right: AI does not care at all.
Nicholas Andresen wrote a post last year, "The Hidden Cost of Our Lies to AI." Nicholas says: "Our individual conversations might reset, but their aggregate shapes what we might call the cultural memory of artificial intelligence."
Our dashboards in Airtable, HubSpot, and Linear might not caution us, but every project and campaign where we are deliberately dishonest with AI comes back to the industry in some way. That means it comes back to us, and we may not even know it.
Being dishonest with AI is like passive smoking: it harms everyone around us. In that sense, every day is 31 May for me, World No Tobacco Day. Let there be healthy AI use that serves all of us collectively.