Q&A With Dr. Anna Zeiter, Chief Privacy Officer, Associate General Counsel for Privacy, Data & AI



Artificial Intelligence (AI) has evolved and improved dramatically over the past 18 months, and the technology has myriad applications and amazing potential for the future of ecommerce. But with such lightning-quick advancements, it’s vital that we in the technology industry, especially companies like eBay with the size, scope, and personnel to guide the future of AI, ensure that this future is fair, ethical, and responsible. To find out more about how eBay is continuing its work on responsible AI, and how the industry may progress, we spoke to eBay’s Chief Privacy Officer, Dr. Anna Zeiter.


Q: What are eBay’s major focuses in terms of responsible AI?

The most important mantra, the phrase we keep in mind as the top priority guiding our actions, is that we want to do the right thing. We want to implement, develop, and deploy AI in a reasonable and responsible way, and to follow an approach that balances speed and safety.

Another really important aspect of eBay’s responsible AI approach is a theme consistent across all parts of our business, which is to be customer-focused. We have the customer, the user, and also the employee in mind, which means we have to be human-centric. We’re working to ensure that we are transparent, that our AI tech is unbiased and produces no discriminatory outcomes, that it is consistent across eBay, and that user privacy is respected. All of our AI systems and models should be continuously tested, re-tested, and monitored to ensure that they remain fair and safe.

We’re also committed to having human oversight in our AI work; we see AI as a tool to augment human capabilities, rather than to replace them. By keeping real people in the loop and also continuing to offer non-AI, human alternatives, we can ensure fairness. 

I think some companies see responsible AI as something that’s nice to have. At eBay, responsible AI is at the core of our AI work, which is really wide-ranging and poised to become even more so. It’s not a nice-to-have for us; it’s a requirement.


Q: What major projects have we done to date around responsible AI?

We have a global standard in place for responsible AI. We’re also in the process of establishing a responsible AI committee, with Nitzan Mekel-Bobrov, our Chief AI Officer, as the chair. The committee will also include Mazen [Rawashdeh, eBay’s Chief Technology Officer], and we’ve hired a Senior Director for Responsible AI, Lauren Wilcox, who will report to Nitzan. She comes to us from Google and starts in early September, and the committee will begin operations after she arrives, as she’s going to be an important part of the process.

We have already trained employees, including rolling out our “Leading With Integrity” ethics training worldwide. We have also established guidelines for generative AI, especially for the use of chat tools, including our own internal version. We’re privileged to have the hardware, software, and personnel to run our internal version, which in turn allows us to explore the possibilities of generative AI in a safe, controlled environment. At this early stage of the technology, it’s really beneficial to be able to work, iterate, and test privately before we roll out publicly.

Q: How do we create standards and guidelines around such a new and evolving tech?

We are currently in the process of drafting and establishing a responsible AI policy. There are always three layers of compliance documents for all parts of our business: policy, which is very general; standards; and then guidelines. Our principles for creating these documents are to move fast, stay agile, and continue to be well connected with peers in the industry.

I’m on an AI advisory board, along with other CPOs in the industry, including those from Google, Microsoft, and IBM. Our objective there is to exchange industry best practices. We’re not aiming for perfect; perfect is not possible, given the nascent stage of AI and the fact that things are moving extremely fast. And we cannot let the pursuit of perfection keep us from implementing and following the best regulations available to us at any given moment.


Q: What makes eBay’s approach to responsible AI different from other tech companies?

Our history and the size of our team separate us from the pack; we have a history with AI going back 20 years, both using and developing it. It’s also unique among our peer companies that we have such a big team. Other ecommerce sites tend to buy or rent AI as a service rather than developing their own. We invested heavily early on, and that’s paying dividends now. Privacy and security in AI aren’t something new that we need to grapple with; they’re something we’ve been thinking about and working on for years already.

People might think from the outside that eBay is only a marketplace, but we’re much more than that. We work in payments, advertising, first- and third-party integration, NFTs, marketing, fraud protection, anti-money laundering, all kinds of things. And it’s all AI-driven to some degree; over 200 teams at eBay are working on AI-related applications, spanning the entire company.


Q: How has eBay participated in the creation of legislation and regulation around responsible AI?

eBay has for years been a member of the IAPP, the International Association of Privacy Professionals, and I myself am a board member there. Privacy authorities are the ones who have already started working on enforcement in the AI field, so the IAPP started its own AI governance workstream earlier this year. Almost all of the big tech companies have their CPOs in the IAPP, alongside academics and other privacy professionals.

We just had a meeting in Boston, in July, with the IAPP board, trying to figure out what a responsible AI framework looks like, how to work with regulators, which teams within companies need to be involved, and how we can roll out internal training and risk assessments — this is all coming very soon, and we need to be prepared. 

The most advanced law is the European Union AI Act, which is still a draft. Regulators, especially in Europe, feel that the time required to pass the law, likely a year or two, is far too long. So privacy regulators are already taking existing privacy laws, like the GDPR, and applying principles within those laws, such as the need for transparency, to AI. One example of that is being very clear about whether you are speaking, in a customer service situation, to a human or an AI. The Italian privacy regulator actually banned ChatGPT for noncompliance in the spring, and ChatGPT had to implement changes to become available in Italy again.


This is a unique situation, because nobody, not regulators and not companies like eBay, is waiting around. We could theoretically say, you know, there’s no law yet, so we don’t have to do anything. But we’re not doing that. Instead, we’re implementing the draft European AI Act, even though it’s not yet law. Partly that’s because it’s the right thing to do, and partly it’s so we can be compliant and ready to go when these laws pass, as we anticipate they will. That also requires that we work with US and European lawmakers together, to ensure that when both pass laws, they’re consistent and not contradictory.

eBay, like the other big tech companies, is extremely cognizant of the fact that there is the potential to make mistakes, and that those mistakes could be really damaging both to our customers and to our own reputation. So we are, in a sense, self-regulating much more strictly than we legally need to, in order to be as careful and cautious as we can be.


Q: How do we convey to sellers, buyers, and employees what our AI principles are?

We’re going to make sure to explain on our website our key overarching principles, like transparency, explainability, and human-in-the-loop protections. We want to ensure that a human being takes part in the training and refining of our models, and we think that’s both reassuring and a really effective way to safely utilize this tech. We’re also discussing internally some sort of AI labeling, so that whenever something is created using AI, like a product description generated from a photo, it’s properly labeled as AI-generated. This will need to be ironed out, of course; AI is going to be integrated very deeply into technology in the future, and it will be vital to delineate what we need to convey to users and employees. But we think we will be able to take advantage of the possibilities of AI without losing sight of the fact that we are humans, working to help other humans achieve their goals and connect with communities all around the world.



