Assume an organization does not support or integrate AI for any use case: its products, operations, and supporting technology across all functions are designed without AI capabilities.
That does not necessarily mean it is safe from AI.
The challenges may relate to compliance, intellectual property and copyright, or keeping the content architecture clean and safe. Merely defining an AI policy does not help; your content infrastructure must support the no-AI policy.
Take the example of an ecommerce or logistics platform where customers can import listings. Even if the product itself does not support AI, the user-imported content might be AI-generated; users may have built those listings with AI tools.
Imagine that imported invoices for a list of medicines carry label instructions written by AI. Or the listing images in the PIS (Product Information System) are AI-generated. Or customers can directly paste AI-generated content anywhere while using the product.
This user-generated artificial content can quickly travel anywhere in the content supply chain: marketing emails, support center tickets, webinar promotions, or even references in the content metadata.
The organization’s content infrastructure should enforce rules that safeguard its native architecture, the supporting technology for organization-wide operations, and its positioning from such AI content.
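As a minimal sketch of what such a rule might look like, an ingestion-time gate could quarantine imported items that lack verifiable provenance metadata. The field names (`provenance`, `source`, `signed_manifest`) and the policy itself are my assumptions for illustration, not an established pattern:

```python
# Sketch of an ingestion-time policy gate for user-imported content.
# The metadata fields ("provenance", "source", "signed_manifest") and
# the accept/quarantine policy are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class ImportedItem:
    item_id: str
    metadata: dict = field(default_factory=dict)


def gate(item: ImportedItem) -> str:
    """Return 'accept' or 'quarantine' for an imported item."""
    prov = item.metadata.get("provenance", {})
    # Accept only items that declare a provenance source AND carry a
    # signed manifest; everything else goes to human review.
    if prov.get("source") and prov.get("signed_manifest"):
        return "accept"
    return "quarantine"
```

In practice such a gate would sit at every import boundary (listing uploads, invoice ingestion, metadata writes), so that unvetted content never enters the downstream supply chain in the first place.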
Take another example of a university where students submit their assignments with the help of AI. Even if the university’s own systems do not support AI, it does not mean they are AI-proof. A story by The Wall Street Journal, There’s a Good Chance Your Kid Uses AI to Cheat, shows that such institutions need to rethink their content infrastructure to handle these situations.
For specific use cases such as detecting AI-generated images, metadata standards such as the Coalition for Content Provenance and Authenticity (C2PA) can help you identify such graphics (see a related story). But this is a very small step and may not be sufficient on its own.
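As a crude illustration of the idea (not a real C2PA validator), C2PA manifests embedded in JPEG files are carried in JUMBF boxes labeled "c2pa", so a naive byte scan can flag images that appear to carry such a manifest. Proper validation requires a full C2PA toolchain that parses the manifest store and verifies its signatures; this sketch does neither:

```python
# Crude heuristic: flag JPEG bytes that appear to embed a C2PA manifest.
# C2PA stores manifests in JUMBF boxes labeled "c2pa"; this scan only
# looks for that marker string. It cannot verify signatures and can
# produce false positives, so it is NOT a substitute for real validation.

def looks_like_c2pa(data: bytes) -> bool:
    # Require the JPEG start-of-image marker, then scan for the label.
    return data.startswith(b"\xff\xd8") and b"c2pa" in data


# Fabricated byte strings for illustration; these are not real images.
fake_jpeg = b"\xff\xd8" + b"...jumb...c2pa..." + b"\xff\xd9"
plain_jpeg = b"\xff\xd8" + b"no manifest here" + b"\xff\xd9"
```

Note that the heuristic only tells you an image *claims* provenance; an absent marker says nothing about whether the image was AI-generated, which is exactly why this remains a very small step.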
I have yet to see a case study or reference where an organization has successfully designed its content systems and infrastructure to safeguard against AI-generated content. When I find one, I will write a follow-up post to share those experiences.