Navigating AI Unknowns: How to Use Transparency to Build Customer Trust
Businesses using AI tools are asking customers to trust them with new "black box" technology; in exchange, they need to make clear to customers exactly how they are using AI
Businesses using powerful AI tools like ChatGPT are making an implicit promise to customers: we will give you vastly better service, with (1) faster response times, (2) everything customized just for you, (3) smarter decisions, and (4) far better value for your money. In exchange, they are asking customers to trust “black box” algorithms that even the businesses themselves don’t fully understand, and to entrust those businesses with personal data covering almost everything about them.
But for this arrangement to work, to be fair, and to be legal, businesses have to hold up their end. That means, in a word, transparency. Businesses using AI tools must be transparent about (1) how AI is used and its limitations, (2) the risks associated with AI integration, (3) the type of data collected and how it is analyzed, and (4) the overall usage and handling of customer data, in order to maintain trust and avoid potential legal issues.
To uphold transparency and disclosure, businesses must (1) provide clear, honest explanations about their AI use and intentions and (2) keep their promises. This involves (1) offering straightforward information about AI applications and data collection, (2) establishing and communicating data privacy policies, (3) enabling customers to access, modify, or delete their personal data, and (4) continually monitoring and updating AI systems and policies to stay current with evolving technologies and regulations.
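The "access, modify, or delete their personal data" obligation in point (3) can be sketched in a few lines. This is a minimal illustration only; the in-memory store and the function names (`access_data`, `update_data`, `delete_data`) are hypothetical, not any real framework's API:

```python
# Hypothetical sketch of customer data-rights handling.
# A real system would add authentication, audit logging, and durable storage.

user_data: dict = {}  # user_id -> personal data record

def access_data(user_id: str) -> dict:
    """Let a customer see exactly what the business holds about them."""
    return dict(user_data.get(user_id, {}))

def update_data(user_id: str, **changes) -> None:
    """Let a customer correct their own record."""
    user_data.setdefault(user_id, {}).update(changes)

def delete_data(user_id: str) -> None:
    """Honor a deletion request; deleting an unknown user is a no-op."""
    user_data.pop(user_id, None)
```

For example, after `update_data("u1", email="pat@example.com")`, the customer can call `access_data("u1")` to review the record and `delete_data("u1")` to have it erased.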
Don’t Keep AI In the Shadows
More from the LawSnap Crystal Ball desk, with a glimpse into the future
"Love Scam! AI-Powered Dating App Swindles Users, Sells Secrets, and Traps Lonely Hearts!" – Deceptive dating app hit with class action, FTC crackdown over AI deception and data misuse
NEW YORK, 2025 – Heartbreak is the name of the game for users of the popular dating app, Smart Search for Serendipitous Soulmates (“SSfSS”), which is now facing a torrent of legal trouble after being exposed for a slew of deceptive practices.
SSfSS promised its customers that its Super Smart AI, dubbed Myrtle the Matchmaker, could use algorithms to help them find love. But now it stands accused of lying to users, selling their sensitive data, and secretly employing AI chatbots to impersonate real people.
SSfSS claimed that it was collecting personal information to help match users with the perfect partners, but it turns out the company was selling the data to third parties. As if that weren't enough, SSfSS duped users into thinking they were chatting with real people, matched by supercomputer algorithms. In reality, customers were talking with “deceptively lifelike” AI chatbots. The company collected extensive information about each user to create customized, convincing chatbots that mimicked genuine human interaction.
Now, the company is facing a storm of legal trouble. A massive class action has been filed, accusing the company of negligence, fraud, and violating user privacy rights. In addition, the Federal Trade Commission (FTC) has stepped in to crack down on SSfSS for its deceptive practices, misuse of user data, and lack of transparency.
Only Be Transparent
We are, of course, in the very early days of widespread, public-facing AI tools, and things are evolving fast. Polls show the public is ambivalent about AI: impressed by its power to deliver faster, more accurate, more personalized service, but also nervous about snooping and wary of decisions being made by robots behind closed doors.
Here are some principles to keep in mind:
Be clear about what AI is doing: Make sure that any communication generated by LLMs is clearly identified as coming from an AI system. This helps prevent confusion and the potential legal issues that may arise if users believe they are interacting with a human when they are not.
User consent and privacy: Before implementing LLMs, make sure to establish a robust process for obtaining user consent to interact with AI systems and ensure that users understand how their data is being collected, processed, and used by the AI tool. This includes being transparent about data storage, retention policies, and sharing practices.
Compliance with emerging AI regulations: As AI continues to evolve, new regulations and guidelines are likely to be introduced. Every company should stay informed about any relevant AI-specific laws or industry best practices and ensure their business complies with them.
Transparency in AI decision-making process: While it might not be feasible to fully disclose the inner workings of the AI, it's important for each company to provide users with a general understanding of how the LLMs make decisions or generate content. This helps users trust the AI system and make informed decisions about whether to rely on the AI-generated content.
Disclosure of AI limitations and potential inaccuracies: To maintain transparency, every company should clearly communicate to users the potential limitations and inaccuracies of AI-generated content. This may include providing information on the AI's training data, the scope of the AI's capabilities, and any known issues that may affect the accuracy or reliability of the content.
Be transparent about what data you are collecting: What are you using the data for? Who can see it?
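The first two points above, labeling AI-generated messages and gating them on user consent, can be sketched in a few lines. Everything here is illustrative: the disclosure text, `label_ai_message`, and `reply_to_user` are hypothetical names, not part of any real chatbot framework:

```python
# Hypothetical sketch: attach a visible AI disclosure to every generated reply,
# and refuse to send AI content to users who have not opted in.

AI_DISCLOSURE = "[This reply was generated by an AI assistant]"

def label_ai_message(text: str) -> str:
    """Prepend a clear disclosure so users know they are not talking to a human."""
    return f"{AI_DISCLOSURE}\n{text}"

def reply_to_user(user_consented: bool, ai_text: str) -> str:
    """Send AI-generated content only to users who have opted in."""
    if not user_consented:
        raise PermissionError("user has not consented to AI interaction")
    return label_ai_message(ai_text)
```

The design choice worth noting is that the consent check sits in the send path itself, so no code path can deliver an unlabeled or non-consented AI message by accident.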