On June 12th, Salesforce announced AI Cloud, unifying its various GPT offerings and bundling the additional technology needed to maximise their effectiveness, including Data Cloud, Tableau and MuleSoft. The immediate impact will be on productivity: we’ll all gain access to a willing assistant that picks up any low-level tasks we want to hand off. What’s more, it never gets bored and treats all feedback as an opportunity to improve.
Salesforce as Your Trusted AI Partner
Not everyone is entirely optimistic about this sea change, however. According to Marc Benioff at the Salesforce AI Day, 73% of employees believe that generative AI introduces new security risks, and around 60% of those planning to use the technology don’t know how to keep their data secure. These are significant numbers that will give any enterprise executive pause for thought – risks without mitigation don’t help CTOs sleep well at night!
Through the Einstein Trust Layer in AI Cloud, Salesforce is positioning itself as the trusted partner for your AI journey. As is often the case, Salesforce is in a unique position to achieve this. It can act as the gatekeeper for the underlying generative AI capabilities. We won’t get direct access to Salesforce or third-party APIs that we can hand our data off to. Prompts will go through Salesforce, where they can be examined for bias, prompt injection attacks and other unwanted behaviour. In the same way, the response returned can be pre-screened for hallucination*. Salesforce verifies sources and citations before asking users to decide whether the output matches their requirements and corporate standards.
* AI hallucinations refer to instances when an AI generates unexpected, false results not supported by real-world data. These hallucinations can manifest as fabricated content, news, or information concerning individuals, events, or facts.
Preparing Your Organisation for the AI Wave
It’s clearly still very early days for AI Cloud, but the technology is evolving rapidly, and no company wants to be left behind. So, what can your organisation do to prepare for the coming AI wave?
1. Don’t abdicate responsibility to Salesforce
When you act based on a GPT response, or send an automatically generated communication to a customer, the responsibility for any issues that arise lies with you.
While AI tools typically won’t engage in plagiarism, they can’t guarantee that the output produced won’t be a character-for-character match of existing work. If that happens, your company, rather than the AI, will be the target for any legal comeback. If you are going to publish anything that AI has created for you, make sure to carry out your due diligence.
Educate yourself on how your confidential data will be used. If you are including CRM data to augment a request to OpenAI, Salesforce has an agreement that none of that data will be retained for model training. However, at the time of writing (June 2023) OpenAI APIs are only available on servers in the United States. That means that your data will be transmitted to the US and processed there, even if it isn’t retained. And this could mean that you inadvertently breach GDPR or similar regulations. View the Einstein Trust Layer like Einstein GPT itself – as a valuable assistant that requires human oversight.
2. Clean the data
As always with new technology, quality in means quality out. For example, if you want to use generative AI to triage service cases, you need to make sure that the quality of your existing case data is top notch.
If there’s little to no detail, or incorrect information (ideas that turned out to be wrong, but the case wasn’t updated to reflect that), it’s not going to do a very good job of helping the customer. This is also an excellent time to be sweeping your data for PII (Personally Identifiable Information) that exists where it shouldn’t – in case notes or opportunity descriptions, for example.
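A first-pass sweep like this can be automated. The sketch below scans exported free-text fields for common PII patterns; the field contents, record IDs and regex patterns are illustrative assumptions, not Salesforce APIs, and any real sweep needs locale-specific patterns plus human review of the hits.

```python
import re

# Hypothetical patterns for a first-pass PII sweep of free-text fields.
# Real-world sweeps need locale-specific patterns and human review.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(record_id, text):
    """Return (record_id, pii_type, match) for each suspected hit."""
    hits = []
    for pii_type, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text or ""):
            hits.append((record_id, pii_type, match))
    return hits

# Illustrative case notes exported from the CRM
cases = {
    "500XX001": "Customer asked us to call back on 07700 900123.",
    "500XX002": "Resolved via email to jane.doe@example.com.",
}

for case_id, notes in cases.items():
    for hit in scan_for_pii(case_id, notes):
        print(hit)
```

Flagged records can then be routed to a data steward for redaction before any of the text is fed into a prompt.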
3. Train your staff
Most important is training your staff to write good prompts, so that they can generate the best possible results.
This might sound simple – it’s just a conversation, after all. But it’s very easy to let your bias sneak in by dropping the odd hint that guides the model toward the answer you’ve already decided you want. This ZDNet article provides a great introduction with some useful examples. Next, you need to train your staff to recognise what good looks like. If they are going to be the arbiters of content that goes out to customers, they need to know what the gold standard is. Help them out by providing guidelines around what any customer communication should include or avoid. Finally, train them on AI Cloud, starting with the Einstein GPT Quest. At the time of writing there isn’t a huge amount of Trailhead content in place. But you can bet that it is being created at a rapid pace – possibly by AI!
ChatGPT reached 100 million users around two months after its launch, making it the fastest-growing user base ever, according to a UBS study. This technology is coming, and your company will almost certainly end up using it whether you know about it or not. So, getting ahead will set you up far better for success than scrambling to regain control.