AI and Machine Learning: Perspectives with FPF’s Brenda Leong
As we prepare to toast our 10th anniversary, we’re hearing from FPF policy experts about important privacy issues. Today, Brenda Leong, FPF Senior Counsel and Director of Strategy, is sharing her perspective on AI and machine learning. Brenda also manages the FPF portfolio on biometrics, particularly facial recognition, and oversees strategic planning for the organization.
Tell us what you think the next 10 years of AI, machine learning and privacy will bring.
Our 10th anniversary celebration will be on April 30. RSVP here.
How did you come to join the Future of Privacy Forum and work on AI and machine learning privacy issues?
My first career was in the Air Force, and my last two assignments before I retired were at the Pentagon and the State Department. I learned that I really enjoy working on policy, and I decided to explore new policy areas after I retired from the military. I went to law school at George Mason University, where I became very interested in telecom issues and privacy law.
People kept telling me, “If you want to work in privacy in Washington, DC, you need to meet Jules Polonetsky.” So I went to a policy event and cornered Jules. That led to an FPF policy fellowship, and I’ve been at FPF ever since – almost five years.
About a year after I joined FPF, Jules – who is an expert prognosticator – suggested we learn more about AI because it was becoming a focus of the tech industry, being incorporated into autonomous vehicles, facial recognition, advertising tech, and a lot of other areas. I jumped at the chance, and I’ve been working on AI and machine learning issues ever since.
What’s the difference between AI and machine learning?
That’s a good question, and something we explored in The Privacy Expert’s Guide to AI and Machine Learning, which FPF released last October. Most of what has actually been deployed is machine learning – algorithms that can evaluate their own output and adjust their own internal parameters, without a human rewriting the program. Machine learning is used in image recognition, facial recognition, sensory inputs for autonomous vehicles, and many other tasks.
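To make that idea concrete, here is a minimal, hypothetical sketch – invented numbers, not any particular product – of a one-parameter model that measures its own prediction error and adjusts itself, with no human editing the logic:

```python
# Minimal, hypothetical sketch: the program evaluates its own error and
# adjusts its parameter -- no human rewrites the logic. Data is invented.
data = [(1.0, 2.1), (2.0, 4.2), (3.0, 5.9), (4.0, 8.1)]  # (input, target) pairs

weight = 0.0          # the single parameter the model will adjust
learning_rate = 0.01

for _ in range(1000):
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # the model evaluates its own output
        weight -= learning_rate * error * x  # ...and adjusts its parameter

print(f"learned weight: {weight:.2f}")  # ends up near 2.0, learned from the data
```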
I like the definition of artificial intelligence by Stuart Russell, who wrote one of the key textbooks in this space:
“An entity is intelligent to the extent that it does the right thing, meaning that its actions are expected to achieve its objectives… This notion of doing the right thing is the key unifying principle of AI. When we break this principle down and look deeply at what is required to do the right thing in the real world, we realize that a successful AI system needs some key abilities, including perception, vision, speech recognition, and action.”
There aren’t yet many real-world applications of classic AI that meet that definition – real-time language translation and email spam blocking come to mind. By the way, Russell’s quote is from Architects of Intelligence: The Truth About AI from the People Building It by Martin Ford – the current FPF Privacy Book Club selection. Anyone can join the book club and participate in our discussion on February 27.
What are some of the privacy issues around machine learning?
Some machine learning requires almost unimaginable amounts of data – millions of records. Traditional privacy practices emphasize data minimization: you collect only the data you need for a specific purpose and keep it only as long as necessary for that purpose. That principle is tough to reconcile with machine learning systems that need enormous amounts of data, sometimes including personal data.
There can also be issues of bias and fairness. Some people are concerned about what a company might do with a profile about them. Even without my name, machine learning can help a company infer things about me that I don’t know it knows, or even things I don’t know about myself. For example, an analysis of shopping data from people with similar profiles may be very accurate at predicting my preferences and behavior.
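As a rough illustration of how “similar profiles” can drive predictions, here is a hypothetical sketch – the shoppers, purchase counts, and choice of similarity measure are all invented for this example:

```python
# Hypothetical sketch: rank other shoppers by how similar their purchase
# histories are to mine. All names and numbers are invented.
import math

# Each profile: purchase counts for [diapers, wine, camping gear]
profiles = {
    "shopper_a": [5, 0, 1],
    "shopper_b": [4, 1, 0],
    "shopper_c": [0, 6, 2],
}
me = [5, 1, 0]  # my pseudonymous purchase history -- no name attached

def cosine(u, v):
    """Similarity of two purchase histories (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# The most similar profiles are what drive predictions about me.
ranked = sorted(profiles, key=lambda name: cosine(me, profiles[name]), reverse=True)
print(ranked)  # ['shopper_b', 'shopper_a', 'shopper_c']
```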
If a machine learning program is trained on existing data sets, it can amplify biases that were present in the original human-selected data. In that situation, the algorithm has to be changed to detect and adjust for bias in the data set; in that way, better math is part of the solution. Computer scientists tell us no system is without bias. The point is to understand what biases you have chosen, what priorities are built into the system, and whether that will give you the results you want.
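One simple piece of that “better math,” sketched here with invented data, is measuring outcome rates across groups – a demographic-parity check is one of many possible bias metrics:

```python
# Hypothetical sketch: one simple bias check -- compare positive-outcome
# rates across groups (a demographic-parity gap). All data is invented.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

gap = approval_rate("A") - approval_rate("B")
print(f"demographic-parity gap: {gap:.2f}")  # 0.67 vs. 0.33 -> gap of 0.33
```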
When we’re talking about race or ethnicity, some people ask, “Can’t you just take that data out?” But it’s not that easy, because many other data fields tend to correlate with race. You need to understand the bias in the data and adjust for it.
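Here is a hypothetical sketch of why removal alone fails: even with the sensitive column dropped, a proxy field (an invented ZIP code in this example) lets a simple majority-vote guess recover much of the removed attribute.

```python
# Hypothetical sketch: the sensitive column is "removed," but an invented
# ZIP-code field still carries much of the same signal.
from collections import Counter

records = [
    {"zip": "10001", "race": "X"},
    {"zip": "10001", "race": "X"},
    {"zip": "10001", "race": "Y"},
    {"zip": "20002", "race": "Y"},
    {"zip": "20002", "race": "Y"},
    {"zip": "20002", "race": "X"},
]

# Group the removed attribute's values by ZIP code...
by_zip = {}
for r in records:
    by_zip.setdefault(r["zip"], []).append(r["race"])

# ...then guess each person's value from their ZIP alone (majority vote).
correct = sum(
    Counter(by_zip[r["zip"]]).most_common(1)[0][0] == r["race"] for r in records
)
print(f"recovered from ZIP alone: {correct}/{len(records)}")  # 4 of 6 here
```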
Another set of concerns is transparency. People want to understand how their data is being used – that’s a key privacy practice. But that can be difficult if the algorithm can change itself. In machine learning, the program is constantly evolving, which makes it challenging to pick out a moment in time and determine why the program generated a specific result at that moment. So traditional transparency analysis, which traces precisely how data is used at each step, is hard to do with machine learning. There are ways to analyze it using math and statistics, but they can be tough to understand, which limits transparency.
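As one example of that kind of statistical analysis – a simple sketch with invented weights, not a description of any specific tool – a single prediction from a linear model can be decomposed into per-feature contributions:

```python
# Hypothetical sketch: attribute one prediction of a simple linear model
# to its input features. Weights and inputs are invented.
weights = {"income": 0.4, "age": -0.1, "tenure": 0.3}
applicant = {"income": 2.0, "age": 1.5, "tenure": 0.5}

# Each feature's contribution to the final score
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score: {score:.2f}")
for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contrib:+.2f}")  # which inputs drove this result
```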
What has FPF’s AI and Machine Learning Working Group been up to?
The working group brings together FPF members to stay abreast of how AI and machine learning are being used, learn from outside experts, and review and contribute to FPF documents.
We often have speakers come in to talk about AI and where it is headed. For example, we recently had a computer scientist talk about AI and bias. Our presentations and discussions help the legal and policy professionals who tend to be involved with FPF better understand the technology and how it is being used, so they are well informed in discussions within their companies about the products and services their designers are building.
We also get input from working group members on our publications, like The Privacy Expert’s Guide to AI and Machine Learning, Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models and our publications about facial recognition. The AI and Machine Learning Working Group members have tremendous expertise. It’s great to learn from them and share their perspective with our broader membership and the public.