You need content infrastructure to fight AI


Assume that an organization does not support or integrate AI for any use case. It designs its products, operations, and the supporting technology across functions without AI capabilities.

That does not necessarily mean the organization is safe from AI.

The challenges could relate to compliance, intellectual property and copyright, or to keeping the content architecture clean and safe. Merely defining an AI policy does not help; your content infrastructure should enforce the no-AI policy.

Take the example of an ecommerce or logistics platform where customers can import listings. Even if the product itself does not support AI, the user-imported content might be designed by AI; users might have built these listings with AI tools.

Imagine that imported invoices carry a list of medicines whose label instructions were written by AI. Or the listing images in the PIS (Product Information System) are generated by AI. Or customers can directly paste AI-generated content anywhere while using the products.

This user-generated artificial content can quickly travel anywhere in the content supply chain: in marketing emails, support center tickets, webinar promotions, or even as references in the content metadata.

The organization’s content infrastructure should encode rules that safeguard its native architecture, the technology supporting organization-wide operations, and even its positioning, from AI-generated content.
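To make this concrete, here is a minimal sketch in Python of what one such rule could look like at the content-intake boundary. Everything in it is an assumption for illustration: the ImportedContent record, the declared_generator metadata field, and the list of generator labels are hypothetical, not any platform's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical intake record: a user-imported listing plus whatever
# metadata the import pipeline could collect about its origin.
@dataclass
class ImportedContent:
    body: str
    metadata: dict = field(default_factory=dict)
    flags: list = field(default_factory=list)

# Assumed generator labels, purely illustrative: tools that would mark
# content as AI-produced if the source declared itself.
KNOWN_AI_GENERATORS = {"chatgpt", "midjourney", "dall-e", "stable-diffusion"}

def apply_no_ai_rule(item: ImportedContent) -> ImportedContent:
    """Flag imported content whose metadata declares an AI generator,
    or whose provenance is missing entirely, before it enters the
    content supply chain (emails, tickets, metadata, and so on)."""
    generator = str(item.metadata.get("declared_generator", "")).lower()
    if generator in KNOWN_AI_GENERATORS:
        item.flags.append("declared-ai-generator")
    elif "declared_generator" not in item.metadata:
        # No provenance at all: route to review rather than reject,
        # since absence of metadata proves nothing either way.
        item.flags.append("unknown-provenance")
    return item

checked = apply_no_ai_rule(
    ImportedContent(body="Product listing text...",
                    metadata={"declared_generator": "ChatGPT"})
)
print(checked.flags)  # ['declared-ai-generator']
```

A real pipeline would chain many such rules for text, images, and metadata, and would treat missing provenance as a review queue rather than as proof of AI use.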

Take another example: a university where students submit their assignments with the help of AI. Even if the university’s own systems do not support AI, that does not make them AI-proof. The Wall Street Journal story There’s a Good Chance Your Kid Uses AI to Cheat shows that such institutes need to rethink their content infrastructure to handle these situations.

For specific use cases such as detecting AI-generated images, there are metadata standards such as the Coalition for Content Provenance and Authenticity (C2PA) that can help you identify the graphics (see a related story). But this is a very small step and may not be sufficient at all.
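As a concrete illustration, here is a minimal heuristic sketch in Python that checks whether a JPEG carries a C2PA manifest at all. C2PA embeds its JUMBF manifest in JPEG APP11 segments, so the sketch walks the segment headers and looks for the c2pa label. It only detects presence; it does not verify the manifest or its signatures, which a real system would do with a full C2PA SDK. The filename is a placeholder.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: does this JPEG carry a C2PA manifest?

    Walks the JPEG segment headers and looks for the 'c2pa' label
    inside any APP11 (0xFFEB) segment, where C2PA stores its JUMBF
    manifest. Presence only; this does NOT validate anything.
    """
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":  # not a JPEG (missing SOI marker)
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:
            break  # lost segment sync; stop scanning
        marker = data[offset + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # Segment length is big-endian and includes its own 2 bytes.
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        payload = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        offset += 2 + length
    return False

print(has_c2pa_manifest("listing-image.jpg"))
```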

I have yet to see a case study, example, or reference where an organization has successfully designed its content systems and infrastructure to safeguard against AI content. When I find one, I will write a follow-up post to share those experiences.

Vinish Garg

I am Vinish Garg, and I work with growing product teams on their product strategy, product vision, product positioning, product onboarding and UX, and product growth. I work with products in UX and design leadership roles, in product content strategy and content design, and on brand narrative strategy. I offer training via my advanced courses for content strategists, content designers, UX writers, content-driven UX designers, and content and design practitioners who want to explore product and system thinking.

Interested in staying informed about my work, talks, writings, programs, or projects? See a few examples of my past newsletters: All things products, Food for design, Inviting for 8Knorks. You can subscribe to my emails here.

Vinish Garg is an independent consultant in product content strategy, content design leadership, and product management for growing product teams.