
Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing, or hybrid clouds, or blockchain, or artificial intelligence, or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well now. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, or cheap; pick two. When it comes to AI, depending on your choices and patience, you could win an attractive share of a projected $83 billion AI industry by 2021, or of an estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there is no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier's Declaration of Conformity (DoC). The goal: to increase the transparency of particular AI services and engender trust in them.

Such factsheets alone could give AI offerings a competitive advantage in the marketplace. They could also provide explainability around susceptibility to adversarial attacks, which, along with fairness and robustness, are issues that must be addressed before AI services can be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render an AI system understandable to both researchers and developers.

Three core pillars form the basis for trust in AI systems: fairness, robustness, and explainability. Later in her piece, Mojsilovic introduces a fourth pillar, lineage, which concerns an AI system's history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service's data workflow, the methodologies used in testing, and bias mitigations performed on the dataset.
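To make the idea concrete, a factsheet could be expressed as a machine-readable record covering those topics. The sketch below is purely illustrative: the field names, service name, and values are assumptions for this post, not IBM's actual DoC format.

```python
# Illustrative sketch of an AI-service factsheet as a Python dict.
# All field names and values here are hypothetical examples.
factsheet = {
    "service": "sentiment-classifier-v2",            # hypothetical service
    "intended_use": "customer-review sentiment scoring",
    "training_data": {
        "source": "internal product reviews, 2016-2018",
        "bias_mitigation": "reweighting of under-represented groups",
    },
    "algorithm": "gradient-boosted decision trees",
    "performance": {"accuracy": 0.91, "f1": 0.89},   # from the test setup
    "fairness_checks": ["demographic parity", "equalized odds"],
    "robustness_checks": ["adversarial perturbation test"],
    "lineage": {"last_retrained": "2018-11-01",
                "data_workflow": "tracked in pipeline logs"},
    "maintenance": "quarterly retraining",
}

def missing_fields(sheet, required):
    """Report which required factsheet sections are absent."""
    return [field for field in required if field not in sheet]

required = ["intended_use", "training_data", "performance",
            "fairness_checks", "robustness_checks", "lineage"]
print(missing_fields(factsheet, required))  # → []
```

A simple completeness check like `missing_fields` hints at how voluntary factsheets could still be audited consistently across vendors.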

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.
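A data statement could take a similar structured form. The sections below loosely follow the general data-statement proposal for NLP; every concrete value is a made-up illustration, not a real dataset.

```python
# Minimal sketch of a "data statement" for a hypothetical NLP dataset.
# Section names are illustrative; values are invented for this example.
data_statement = {
    "curation_rationale": "tweets sampled for sentiment labeling",
    "language_variety": "en-US, informal social-media register",
    "speaker_demographics": "mostly US-based users; ages not collected",
    "annotator_demographics": "crowd workers screened for English fluency",
    "speech_situation": "public short-form posts, 2017-2018",
    "known_limitations": ["regional slang under-represented"],
}

def coverage(statement, sections):
    """Fraction of the expected sections this statement documents."""
    return sum(section in statement for section in sections) / len(sections)

sections = ["curation_rationale", "language_variety",
            "speaker_demographics", "annotator_demographics",
            "speech_situation"]
print(coverage(data_statement, sections))  # → 1.0
```

Documenting speaker and annotator demographics this way is exactly what would surface biases like the accent gap discussed next.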

Natural language processing systems aren't as fraught with controversy as, say, facial recognition, but they have come under fire for their susceptibility to bias. IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap, the tendency of voice recognition models to skew toward speakers from certain regions. But in Mojsilovic's view, documents detailing the ins and outs of systems, namely factsheets, would go a long way to restoring the public's faith in AI.

Fairness, safety, reliability, explainability, robustness, accountability: all agree these are critical. Yet to achieve trust in AI, making progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate a system's performance on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, one IBM believes industry, academia, and AI practitioners should work on together.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

