The Wall Street Journal Future of Everything Festival just ended. Socialite Paris Hilton was there. Contemporary Chinese artist and activist Ai Weiwei was there. And Shared Assessments was there to take in a few sessions.
The session most relevant to Risk Management and Privacy was an interview with Marian Croak, the new VP of Engineering at Google. Croak discussed “Consumer Trust and Inherent Bias in Tech,” focusing on “Ethical Artificial Intelligence.”
Croak spoke to how Google both uses and furthers ethical AI, describing Google as approaching responsible AI through two different work streams: “Responsible AI” and “AI for Social Good.”
Google’s past research on responsible AI has been diffuse (rather than focused). The field is nascent: Google established its principles around ethical AI only four to five years ago. Organizations and people are just now forming normative definitions of fairness or privacy and ensuring these factors are measurable.
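To make “measurable” concrete, consider demographic parity, one widely used normative definition of fairness: a model is fairer by this definition when its positive-prediction rates are similar across groups. The sketch below is illustrative only; the function name and data are hypothetical and not drawn from Croak’s talk or Google’s tooling.

```python
# Illustrative sketch: making one normative definition of fairness,
# demographic parity, measurable. All names and data are hypothetical.
from typing import List


def demographic_parity_gap(predictions: List[int], groups: List[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])


# Hypothetical model outputs (1 = approved) for applicants in two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved at 0.75, group "b" at 0.25, so the gap is 0.5.
print(demographic_parity_gap(preds, groups))
```

Once a definition like this is computable, it can be tracked and benchmarked like any other model metric, which is what makes fairness actionable rather than aspirational.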
As we enter a moment of higher ethical awareness, Croak described ethical work and business motives as entwined. Being responsible in the way you develop and deploy technologies (including AI) is fundamental to the good of the business: it supports the brand’s image, and there is no dichotomy between ethics and business.
Croak identified the need to involve the people impacted by AI starting in the initial phases of product conceptualization. Croak stated that, throughout the product cycle, Google is “asking questions, testing, and involving the people we are trying to serve.” Google uses model cards to deepen its understanding of how people will be involved in or impacted by its products.
Model cards are a tool for model transparency that provides a structured framework for reporting on a machine learning (ML) model’s provenance, usage, and ethics. Model cards offer an evaluation of a model’s best uses and limitations. Through benchmarking, model cards can reveal cases where an ML application is being used outside the right context – for example, across different cultural, demographic, or phenotypic groups (race, geographic location, sex).
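A minimal sketch of what a model card might look like as structured data appears below, loosely following the section headings of the original model cards proposal (provenance, intended use, disaggregated evaluation, ethical considerations). The field names, the face-detector example, and the metrics are all hypothetical; this is not the schema of Google’s model card tooling.

```python
# Illustrative sketch of a model card as structured data. Field names are
# hypothetical, not the schema of any real model-card library.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelCard:
    # Provenance: who built the model, when, and on what data.
    model_name: str
    version: str
    developers: str
    training_data: str
    # Usage: what the model is for, and what it is explicitly not for.
    intended_uses: List[str] = field(default_factory=list)
    out_of_scope_uses: List[str] = field(default_factory=list)
    # Ethics: benchmark results broken down by group (e.g. demographic or
    # phenotypic groups), so performance skews are visible at a glance.
    disaggregated_metrics: Dict[str, float] = field(default_factory=dict)
    ethical_considerations: List[str] = field(default_factory=list)
    caveats: List[str] = field(default_factory=list)


# Hypothetical example: a face-detection model whose accuracy is reported
# per group rather than as a single aggregate number.
card = ModelCard(
    model_name="face-detector",
    version="1.0",
    developers="Example ML team",
    training_data="Publicly available face images (hypothetical)",
    intended_uses=["Detecting whether an image contains a face"],
    out_of_scope_uses=["Identifying or verifying individuals"],
    disaggregated_metrics={"group_a": 0.97, "group_b": 0.89},
    ethical_considerations=["Accuracy varies across phenotypic groups"],
    caveats=["Not evaluated on low-light images"],
)

# A gap between groups flags a context where the model may not be fit for use.
worst = min(card.disaggregated_metrics, key=card.disaggregated_metrics.get)
print(f"Lowest-performing group: {worst}")
```

The design point is that disaggregated metrics live alongside intended and out-of-scope uses, so anyone deciding whether to deploy the model sees its limitations in the same document as its capabilities.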
Model cards fit into the “Responsible AI” work stream Croak mentioned: they are a methodological contribution guiding how all organizations should conduct technological research and development, especially when using AI and ML. Google also pursues solutions in the second work stream Croak named, “AI for Social Good,” which applies AI to broader societal challenges.
Croak’s resounding message was that we cannot divorce social context from technology. As Google, as organizations, and as individuals, we must remain aware of the culture we live and work in, and implement responsible practices that raise the collective ethical level.
Charlie Miller, Senior Advisor, Shared Assessments, reflected on the lessons Croak imparted: “Ethical Artificial Intelligence (AI) is a complex, emerging, futuristic view of how technologies can benefit humanity. To ensure Ethical AI succeeds, it is critical that intentional and unintentional bias be minimized, as bias can be introduced into AI models at many points along the way, including: the data being selected and used, the diversity of the AI development team, and validation of the model’s outputs to ensure results align with expected outcomes and are not exposed to interpretation biases. It will be worth keeping an eye on Marian Croak and the Google team to follow their internal success and their ability to extend these processes to other organizations and industries.”