Recent advances in artificial intelligence (AI) and the proliferation of generative AI tools have the potential to fundamentally change the future of work. According to McKinsey, AI adoption has more than doubled since 2017, with investments in AI following a similar upward trend. At the same time, many industries and organizations have raised concerns about potential misuse of the personal information and data that generative AI tools rely on to train their models and generate predictions. To address these concerns, companies must establish processes to protect data, ensure data security, and promote fairness and inclusion in their use of generative AI.
At Novata, ensuring data security and the responsible use of emerging technologies are core aspects of our commitment to sustainability. Beth Meyer, Chief Legal Officer at Novata, led the development of Novata’s Responsible Generative AI Policy to guide the use of generative AI tools. In this Q&A, Beth shares insights from Novata’s process and considerations for investors and private companies navigating this nascent industry.
Q&A with Novata’s Chief Legal Officer
Why is it important for an organization like Novata to have a policy around the responsible use of generative AI?
Generative AI can be an incredible asset for teams. There’s so much innovation and value it can provide, and it is a great tool for efficiently distilling insights or drawing out key findings from large bodies of information. But there’s also a tremendous amount of risk to any company and its customers if that company’s employees do not use generative AI responsibly. We wanted to get in front of this risk at Novata. If there isn’t a policy regarding something you know your employees will use, then you’re not empowering them to handle the risk they’re likely to face.
How did you approach developing a responsible AI policy at Novata?
This was a group effort with participation from several members of our executive team. The Legal team led the charge, with critical involvement from our Chief Information Officer, Chief Product Officer, and Chief Operating Officer. It was a fantastic balance in that we had individuals in the group who were hardcore advocates for generative AI and others focused on reining in its use. We wouldn’t have struck the right balance without all those voices — you want to be able to take advantage of the benefits without overlooking the costs.
What were some key considerations when developing Novata’s policy?
We really tried to make it digestible and easy to implement for anyone at the company. We went through potential risk scenarios to identify where the greatest risks lay and focused our key principles and guidelines around those areas. We also decided early on that it was important not to formally prohibit the use of generative AI. Our view was that prohibiting its use would not ultimately prevent the risks we were worried about but would instead create a shadow environment where people use it without disclosure. In other words, we know people are going to use it, so we can either tell them ‘no’ and look the other way, or we can accept that reality, give them rules that are clear and easy to follow, and educate them about why the rules are important. This route leads to people using these tools safely and coming to us with their use cases, which gives us a much better opportunity to address risks before they emerge.
Novata’s Responsible Generative AI Policy consists of several key principles. The first is to be smart about your use of AI: know what you’re doing and be thoughtful about how you use generative AI. Second, do not put confidential information into generative AI tools; this was our top risk when we went through the different risk scenarios. Third, take responsibility for the output of generative AI tools and make sure it isn’t lifted from someone else’s work. There should be proper disclosure or a disclaimer when something you put out into the world was created by generative AI or relied heavily on it. Finally, be honest and come forward with concerns about its use. It was important to stress this disclosure process so employees feel empowered and supported in using these tools.

We plan to update the policy annually at an absolute minimum, and I’d advise organizations to at least look through their policy quarterly. The industry is changing quickly, and if you want your policy to make sense for your internal stakeholders, reviewing it more frequently is helpful.
Industries have varying sensitivities around the data they collect and store. What is “responsible” with regard to data collection?
It goes back to creating a safe environment where people can disclose how they use generative AI and raise any questions about the risks versus rewards. Regarding our second principle around confidential information, a team member asked me, “Well, what’s confidential?” It seems straightforward, but it means different things in different circumstances. For Novata, we define it as anything we would not know or have access to without a customer providing it to us directly, or anything we would not feel comfortable sharing publicly. This is a great starting point for any company. Think about the expectations you have set for those who have entrusted you with their data, legally through your licenses and agreements, but also through how you’ve presented yourself. That should drive your definition of confidentiality and, in turn, your degree of responsibility. The more personal the data you handle, the higher your responsibility.
What role should investors play in ensuring portfolio companies develop reasonable guidelines around data privacy, ethics, and responsibility in the use of AI?
Investors have arguably the most impactful role, perhaps more than regulators. Legislation is not yet settled, and it is slow to implement, with long regulatory cycles and public comment periods. Investors have so much power over the behavior of the companies they invest in or those seeking their investment. ESG is a phenomenal analogy. There’s regulatory pressure in the EU, UK, and other places around ESG, although not as much yet in the US. However, there is still significant adoption of ESG across private and public markets in the US as a result of investors’ demands and expectations. The role of the investor is critical, and they should be asking questions and educating themselves; you can’t ask knowledgeable questions about the responsible use of generative AI if you don’t understand the risks yourself.
For those investing in the companies creating AI models, one critical question is around the diversity of the team. That is a very straightforward way to assess whether the AI that company is building is likely to be used responsibly or to have ethically questionable effects. If you only have people on the team from the same demographic, the likelihood of implicit bias being built into the AI models they’re creating is high, as are the chances of that resulting in discrimination. Look for diverse teams and good governance structures to implement policies. Some of the table-stakes ESG questions investors are already asking portfolio companies could apply to responsible AI, but I would highly recommend that responsible AI be its own item on the agenda for diligence and portfolio monitoring. It may seem like a nice-to-have right now, but we’re going to look back and see it as a must-have. The likelihood of litigation or reputational harm from information leaks, data breaches, or security problems is high if generative AI is misused.
What advice do you have for investors or companies developing such policies and looking for buy-in internally?
Educating everybody in a very open-minded manner is key. If you bang the drum on the risks while ignoring the innovation potential, no one will listen. Approach the use of AI as a novel opportunity with both upsides and downsides. A key point is not to assume that everyone knows the risks, which is something we learned in our own journey. Discuss the potential risk scenarios so people understand what you’re trying to protect against, and make sure employees are aware of and trained on your policy. Also, this is going to sound obvious, but don’t ask a generative AI tool to write your responsible AI policy or suggest a list of risks to you.
It’s also important to consider the legal and regulatory landscape. Law firms are an amazing source of information for this. Getting quality guidance and counsel to stay abreast of changes is really important, as is looking ahead at broad trends. Once regulations are put in place, it’s much harder to shift into compliance if the industry has been running afoul of them in the meantime. The industry is moving at lightning speed: just last month, the Biden-Harris Administration issued an Executive Order outlining guardrails around the safe and trustworthy development of AI. Paying attention and setting sensible rules for responsible use will help you, no matter where regulations land.
What are you most excited about in AI and ESG?
The amount of good it can create is unfathomable, and that excites me the most. Using AI to create novel solutions to the impacts of climate change, or to difficult societal problems we have been stuck on for a really long time, could do an incredible amount of good. There’s a real risk of it eliminating human jobs, but nothing will be able to replace genuine human creativity. In a very odd way, AI is going to force us to confront our own uniquely human strengths. I don’t know what that looks like yet, but I’m excited.