Putting Trust at the Core
Going forward, consumer and regulator expectations are likely to keep shifting. Companies may collect more first-party data directly from their consumers while reducing reliance on third-party data, making the ethical and responsible use of data an even more pertinent issue in the future.
Expanding on the issue of privacy, Facebook’s Neary emphasises that companies, including SMEs, must provide consumers with transparency and empower them by giving them control over their data. “Privacy protections for people and personalised experiences do not have to be at odds,” Neary says. He adds that the company is developing technologies that minimise the amount of personal information processed. This allows Facebook to enhance privacy for consumers while retaining the ability to show relevant ads and measure ad effectiveness.
Lazada relies on collaboration to maintain trust in the responsible use of data across its ecosystem. “We conduct consumer research to understand consumers’ privacy concerns and act based on the findings,” explains Lazada’s Ekbom. The company also actively works with regulators and pursues industry-standard certifications on the ethical use of data to strengthen trust. “Data protection and privacy act as a key trust factor to drive business,” says Ekbom.
As technologies continue to evolve and more regulations emerge, organisations that use AI tools and big data must also ensure AI literacy in their staff, says Salesforce’s Wiegmann. This way, staff will understand how best to collect data and leverage AI tools in a responsible and transparent manner, he adds.
With the goal of ethical and responsible use of AI in mind, Salesforce has established its own Office of Ethical and Humane Use of Technology. The office’s purpose is to create a framework that guides the ethical use of technology, including AI, across the organisation.
It’s a move that Temasek’s Zeller recommends. Depending on the sensitivity of the use case, balancing privacy and personalisation can be a “tightrope walk.” “Companies must acknowledge this and act responsibly,” he adds. One of the first steps companies can take is to translate their core values into an AI & Data Ethics and Governance framework, advises Zeller. Such a framework should focus on key principles: non-maleficence, a balance between automation and oversight by human decision makers, fairness, and explicability.
After all, “AI ethics and governance forms the foundation for trusted personalised consumer experiences,” he says. Maintaining that trust is essential if consumers are to keep sharing access to their data, in turn driving more innovative applications that benefit both consumers and businesses in the future.