Aug 05, 2023

5 Values for Ethical Use of Generative AI

Keir Bowden

With the general availability of Sales and Service GPT, and the Einstein GPT Trust Layer, end users can now get their hands on Generative Artificial Intelligence tools directly from Salesforce. This is an excellent time to consider how you will apply these tools in your business and to ensure that you are using them ethically.

To help with this, here are five core ethical values that you need to keep front of mind: 

1. Transparency

Wherever you plug generative AI into your business process, ensure that everybody knows about it. A chatbot, for example, should introduce itself as AI-powered and give the user a mechanism to provide feedback once the interaction is complete. Something as simple as “How did this chat go?” gives the human on the other end of the interaction a route to raise concerns or praise the quality of the chat.
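
As an illustration, here is a minimal sketch of that pattern in Python. The hooks passed into run_chat (send_to_model, get_user_input, and so on) are hypothetical stand-ins for whatever chat framework and model endpoint you actually use, not a Salesforce API:

```python
# Transparency pattern: disclose that the bot is AI up front,
# and request feedback once the conversation is over.
# All four callables are hypothetical stand-ins for your platform.

AI_DISCLOSURE = (
    "Hi! I'm an AI-powered assistant. A human colleague can take "
    "over at any time - just ask."
)

FEEDBACK_PROMPT = "How did this chat go? (1 = poor, 5 = great)"

def run_chat(send_to_model, get_user_input, send_to_user, store_feedback):
    # 1. Identify as AI before the conversation starts.
    send_to_user(AI_DISCLOSURE)

    while True:
        message = get_user_input()
        if message.lower() in {"bye", "quit", "exit"}:
            break
        send_to_user(send_to_model(message))

    # 2. Give the human a route to praise or raise concerns.
    send_to_user(FEEDBACK_PROMPT)
    store_feedback(get_user_input())
```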

It’s also important to be transparent about the data you are capturing to train AI models: why you need it, how you will use it, how it will be protected, and, most importantly, how a data subject can choose to opt out. Finally, be clear about the human oversight involved. Did a human approve the generated content before it was used in an interaction? Or was the content validated in advance and signed off as generally acceptable? Shine a light on the decisions made.

2. Trust

There are a number of aspects to using AI in a trusted fashion, starting with the Einstein GPT Trust Layer, which provides guardrails to ensure your users aren’t leaking sensitive customer or company data when making requests to third-party systems, and that the responses provided aren’t harmful or inaccurate.

But these are just the basics – you also need to be sure that the response is accurate. You can do that by asking the model to explain how it arrived at the response: the steps taken, the information considered, and any references cited. If the model can’t provide this information, you need an alternative mechanism to verify the response (manually if necessary). An AI-powered solution needs to be at least as accurate, consistent and reliable as the system it is replacing. In the case of self-driving cars, for example, you don’t want the car reacting differently every time it identifies a pedestrian.
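
As a sketch of what that verification step could look like – call_model is a hypothetical stand-in for your model endpoint, and the prompt wording is illustrative rather than a Salesforce-provided template:

```python
# Ask the model to show its working; if it can't, fall back to
# human review rather than trusting the answer blind.

VERIFIABLE_PROMPT = """Answer the question below. Then list:
1. The steps you took to arrive at the answer.
2. The information you considered.
3. Any references used, with enough detail to check them.

Question: {question}
"""

def answer_with_verification(call_model, question, flag_for_review):
    response = call_model(VERIFIABLE_PROMPT.format(question=question))

    # Crude check for illustration: if the response doesn't mention
    # its references, route it to a human for manual verification.
    if "references" not in response.lower():
        flag_for_review(question, response)

    return response
```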

3. Fairness

This value is a little more difficult to quantify. The definition of fairness will be different from company to company, and indeed, from individual to individual. One thing we can all agree on is that bias in the training data is a problem, and continual monitoring is required to keep it from creeping back in.

One of the better ways to avoid bias is to have as large and diverse a team as possible checking the data – the more representative of your target population, the better. Any bias present in the training data tends to be amplified by generative AI, because these models rely heavily on spotting patterns in the data and building rules around them. A simple monitoring check is sketched below.
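
Here is a minimal sketch of one such continual-monitoring check, comparing how each group is represented in the training data against your target population – the "group" field and the target shares are illustrative assumptions:

```python
from collections import Counter

def representation_report(records, target_shares):
    """records: iterable of dicts with a 'group' key.
    target_shares: dict mapping group -> expected share (0..1)."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    report = {}
    for group, target in target_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        report[group] = {"actual": actual, "target": target,
                         "gap": actual - target}
    return report

# Example: a dataset that under-represents group "b" relative
# to a 60/40 target population.
data = [{"group": "a"}] * 80 + [{"group": "b"}] * 20
print(representation_report(data, {"a": 0.6, "b": 0.4}))
```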

4. Security

We know that customers care about how we handle their data. Every survey since the dawn of time has confirmed this! And regulations like GDPR ensure that adequate protections are in place. Like any other data asset, AI models need to be protected from cyber attacks.

Given the current gold rush around AI tools, and how much control is being handed over to the models, they need additional attention. For example, as well as ensuring the training data and model aren’t compromised and leaked elsewhere, it’s important to ensure the model isn’t poisoned with data that is intentionally biased to benefit an attacker. For a detailed discussion of AI risks and their mitigation, see the Artificial Intelligence Risk & Governance paper from the Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS) at the Wharton School.
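
As one example of an integrity control, here is a minimal sketch that fingerprints training data files so unexpected changes (such as a poisoning attempt) are caught before a retraining run – the manifest format is an illustrative assumption:

```python
import hashlib, json

def fingerprint(path):
    # SHA-256 digest of a file, read in chunks to handle large datasets.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path):
    """manifest: JSON mapping file path -> expected sha256 digest,
    recorded when the data was last reviewed."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    tampered = [p for p, digest in manifest.items()
                if fingerprint(p) != digest]
    if tampered:
        raise RuntimeError(f"Training data changed unexpectedly: {tampered}")
```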

5. Accountability

Accountability isn’t just about where the buck stops in the event of a problem. Everyone involved in the creation and deployment of AI in your business – from the developers through to the legal team – is accountable for the impact on users. You need to audit decisions and results on a regular basis: test, review, adjust, then test again.
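
A minimal sketch of what such an audit trail could look like – the field names are illustrative, not a prescribed schema:

```python
from datetime import datetime, timezone
import json

def audit_record(prompt, response, reviewer, approved, notes=""):
    # Capture each generated response alongside who signed it off.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "reviewer": reviewer,   # the human accountable for this output
        "approved": approved,
        "notes": notes,
    }

def append_audit_log(path, record):
    # Append-only JSON Lines file, so history isn't silently rewritten.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```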

The buck does have to stop somewhere, though, so you need to define who is ultimately accountable, and then provide a clear line of communication for raising issues and concerns. Make sure you are aware of the relevant regulations and adhere to them – this matters even at these early stages, as while regulation is lagging the technology (and probably will for a while), it’s not standing still.

Above all, make sure that the way you are using generative AI is improving everyone’s experience – stakeholders, users, customers and partners.


Want to learn more about prompt engineering and its role in using Generative AI effectively and ethically? Join me, Keir Bowden, for a live webinar on August 23rd and gain a clear understanding of how to leverage prompts to enhance your AI interactions!
