IBM Research AI Predictions For 2019
IBM has been pioneering the field of artificial intelligence (AI) since its inception, and the company's 2018 retrospective, released today, provides a sneak peek into the future of AI.
IBM has curated a collection of one hundred IBM Research AI papers published this year, authored by researchers and scientists from IBM's twelve global labs.
We highlight some of this year’s work in three areas – advancing, trusting, and scaling AI.
Advancing AI
- IBM Research AI introduced a Machine Listening Comprehension capability for argumentative content. Stemming from IBM's work on Project Debater, this functionality extends current AI speech comprehension capabilities beyond simple question answering tasks, enabling machines to better understand when people are making arguments.
- If you’ve seen one, you’ve seen them all: Today’s AI methods often require thousands or millions of labeled images to accurately train a visual recognition model. IBM Research AI developed a “few-shot” learning method that can accurately recognize new objects from as few as one example, with no additional data or labeling required (a minimal one-shot sketch follows this list). This capability extends the applicability of AI to data-constrained domains.
- The student becomes the master: This year, IBM researchers presented a framework and algorithm to enable AI agents to learn to teach one another and work as a team. By exchanging knowledge, agents are able to learn faster than previous methods and, in some cases, they can learn to coordinate where existing methods fail.
- Questions and Answers: IBM Research AI detailed an enhancement to open-domain question answering (QA) approaches: a new method that re-ranks and aggregates evidence across multiple passages to produce more accurate answers (also sketched below). The team achieved substantial improvements over the previous state of the art on public open-domain QA datasets.
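To make the few-shot item above concrete, here is a minimal one-shot classification sketch in the spirit of prototype-based methods: a query is assigned to the class of the nearest labeled example in an embedding space. The embed function and file names are hypothetical stand-ins (a real system would use a pretrained deep feature extractor), not IBM’s published method.

```python
import numpy as np

def embed(name):
    # Random stand-in for a pretrained deep feature extractor; real
    # few-shot systems map images into a learned embedding space.
    seed = int.from_bytes(name.encode(), "little") % (2 ** 32)
    return np.random.default_rng(seed).normal(size=64)

def one_shot_classify(query, support):
    """Assign the query to the class whose single labeled example
    (its one "shot") is nearest in embedding space."""
    q = embed(query)
    distances = {label: np.linalg.norm(q - embed(example))
                 for label, example in support.items()}
    return min(distances, key=distances.get)

support = {"cat": "cat_001.jpg", "dog": "dog_001.jpg"}  # one example per class
print(one_shot_classify("query_042.jpg", support))
```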
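The evidence-aggregation idea from the question-answering item can likewise be sketched in a few lines, under the assumption that a retriever/reader has already produced (answer, score) pairs per passage; the scores below are invented for illustration. Instead of trusting the single best passage, support for each candidate answer is summed across passages.

```python
from collections import defaultdict

def aggregate_answers(evidence):
    """Re-rank candidate answers by summing passage scores across all
    passages that yielded the same answer string, instead of trusting
    the single best-scoring passage."""
    totals = defaultdict(float)
    for answer, passage_score in evidence:
        totals[answer.strip().lower()] += passage_score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# (answer extracted from a passage, that passage's retrieval/reader score)
evidence = [("Paris", 0.41), ("Lyon", 0.55), ("Paris", 0.38), ("Paris", 0.20)]
print(aggregate_answers(evidence))  # "paris" wins on aggregated evidence
```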
Trusting AI
- The battle to banish bias: As AI systems are increasingly used for decision support, it is imperative that they be fair and unbiased. Eliminating bias is challenging, however, since the data used to train AI systems often contains intrinsic societal and institutional biases and correlations that statistical learning methods capture and recapitulate. IBM Research AI outlined a new approach for combating bias, wherein training data are transformed to minimize bias, so that any AI algorithm that later learns from them will perpetuate as little inequity as possible (a simplified sketch in this spirit follows this list). In applying this method to two large public datasets, our team was able to substantially reduce unwanted group discrimination without significant reduction in the system’s accuracy.
- Opening the black box: Deep neural networks are in many ways black boxes: even when a network arrives at a correct decision, it is often difficult to understand why that decision was made. This inherent lack of explainability presents a barrier to user trust in AI systems and makes it difficult to reason about potential failure modes. To tackle these problems, IBM Research AI scientists developed a new machine learning methodology called ProfWeight, which probes a deep network and constructs a simpler model that can reach similar performance to the original network (a sketch of the core weighting idea also follows this list). By virtue of their reduced complexity, these simpler models can provide insights into how the original network worked and why it made one decision versus another. In testing this methodology on two massive datasets, the ProfWeight model produced more explainable decisions while maintaining a high level of accuracy.
- Anticipating adversarial attacks: Today’s machine learning models can achieve unprecedented prediction accuracy, but they are also surprisingly vulnerable to being fooled by carefully crafted malicious inputs called “adversarial examples.” For instance, a hacker can imperceptibly alter an image such that a deep learning model is fooled into classifying it into any category the attacker desires. New attacks of this sort are being developed every day across a wide range of tasks, from speech recognition to natural language processing. As a key step toward safeguarding against these attacks, IBM Research AI has proposed a new attack-agnostic, certified robustness measure called CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) that can be used to evaluate the robustness of a neural network against attack (a simplified version of the underlying bound is sketched after this list). The CLEVER score estimates the minimum attack “strength” required to fool a given deep network model, making it easier to reason about the security of AI models and providing directions for detecting and defending against attacks in deployed systems.
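As a rough illustration of the data-transformation approach to bias, the sketch below implements classic reweighing: instance weights are chosen so that group membership and outcome become statistically independent in the weighted data. This is a simplified stand-in for the method described above, and the toy data is invented.

```python
import numpy as np

# Toy training set: protected-group membership and binary outcome.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def reweigh(group, label):
    """Choose instance weights so that, after weighting, group and
    label are statistically independent: each (group, label) cell is
    weighted by expected frequency / observed frequency."""
    w = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            w[cell] = expected / cell.mean()
    return w

weights = reweigh(group, label)
print(weights)  # pass as sample_weight to any downstream learner
```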
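ProfWeight’s central move, using the deep network’s intermediate-layer confidence profile to weight training examples for a simpler model, can be sketched as follows. The probe_confidence values here are random placeholders; in the actual method they come from probes attached to the network’s hidden layers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hypothetical probe confidences: how confidently the deep network's
# intermediate layers predict each example's true label. ProfWeight
# derives these from probes on hidden layers; random stand-ins here.
probe_confidence = rng.uniform(0.2, 1.0, size=200)

# Train the simple, interpretable model on examples weighted by the
# deep network's confidence profile.
simple_model = LogisticRegression().fit(X, y, sample_weight=probe_confidence)
print(simple_model.score(X, y))
```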
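And a simplified version of the intuition behind CLEVER: a lower bound on the attack strength needed to flip a prediction is the classification margin divided by an estimate of the local Lipschitz constant. The toy two-layer network below is an assumption for illustration; the published score additionally fits an extreme value distribution to the sampled gradient norms.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))   # toy two-layer net: hidden weights
v = rng.normal(size=8)        # head computing (true minus runner-up) score

def margin(x):
    # Classification margin: positive while the predicted class wins.
    return v @ np.tanh(W @ x)

def margin_grad(x):
    h = np.tanh(W @ x)
    return W.T @ (v * (1.0 - h ** 2))

def clever_style_score(x0, radius=0.5, n_samples=500):
    """Margin at x0 divided by an estimate of the local Lipschitz
    constant (max gradient norm near x0). A perturbation smaller than
    this cannot drive the margin to zero, i.e. cannot flip the
    prediction within the sampled neighborhood."""
    pts = x0 + rng.uniform(-radius, radius, size=(n_samples, 2))
    lipschitz_est = max(np.linalg.norm(margin_grad(p)) for p in pts)
    return abs(margin(x0)) / lipschitz_est

print(clever_style_score(np.array([1.0, 2.0])))  # larger => more robust
```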
Scaling AI
- 8-bit precision accelerates training: Deep learning models are extremely powerful, but training them requires enormous computational resources. In 2015, IBM presented a paper describing how to train deep learning models using 16-bit precision (half the more typically used 32-bit precision) with no loss of accuracy. IBM researchers have now demonstrated for the first time the ability to train deep learning models with just 8 bits of precision, while fully preserving model accuracy across all major AI dataset categories, including image, speech, and text (one enabling trick, stochastic rounding, is sketched after this list). These techniques accelerate training time for deep neural networks by 2-4x over today’s 16-bit systems. Although it was previously considered infeasible to further reduce precision for training, IBM expects the 8-bit training platform to become a widely adopted industry standard in the coming years.
- New neural net approach on the Block: BlockDrop, a new way to speed up inference in very deep neural networks, learns to choose which layers or “blocks” of the deep network to skip, reducing total computation while retaining accuracy (a miniature version of the skipping mechanism also follows this list). Using BlockDrop, inference is sped up by twenty percent on average, and by as much as thirty-six percent for some inputs, while maintaining the same top-1 accuracy on ImageNet.
- Design within reach: IBM researchers developed a new neural architecture search technique that reduces the heavy lifting required to design a neural network. The method defines repeating neural architecture patterns called “neuro-cells,” which are subsequently improved through an evolutionary process (a minimal mutate-and-select loop is sketched after this list). This evolution can design neural architectures that achieve high prediction accuracy on image classification tasks, without human intervention, in some cases attaining speedups of up to 50,000x compared to prior neural architecture search methods.
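One ingredient that makes very-low-precision training plausible is stochastic rounding, which keeps quantization error zero-mean so that accumulated updates remain unbiased. The sketch below shows that primitive on a signed 8-bit grid; it is an illustrative simplification, not IBM’s full 8-bit training scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_8bit(x, max_abs=1.0, n_bits=8):
    """Map values onto a signed 8-bit grid with stochastic rounding:
    round up with probability equal to the fractional remainder, so
    the quantization error is zero-mean and gradient updates remain
    unbiased in expectation."""
    levels = 2 ** (n_bits - 1) - 1                 # 127 for 8 bits
    scaled = np.clip(x, -max_abs, max_abs) / max_abs * levels
    floor = np.floor(scaled)
    rounded = floor + (rng.random(x.shape) < (scaled - floor))
    return rounded / levels * max_abs

grads = rng.normal(scale=0.1, size=5)
print(grads)
print(quantize_8bit(grads))  # the same values, snapped to the 8-bit grid
```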
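BlockDrop’s mechanism is easy to see in miniature: because a residual block computes x + F(x), skipping a block is exactly the identity map, so a per-input binary mask can prune computation on the fly. The block function and mask below are stand-ins; in BlockDrop the mask is produced by a small policy network.

```python
import numpy as np

def residual_block(x, i):
    # Stand-in for the i-th residual block; real blocks are conv stacks.
    return x + 0.1 * np.tanh(x + i)

def blockdrop_forward(x, keep_mask):
    """Run only the residual blocks the policy chose to keep. Because
    a residual block computes x + F(x), skipping one is exactly the
    identity map, so the network stays well-formed."""
    for i, keep in enumerate(keep_mask):
        if keep:
            x = residual_block(x, i)
    return x

x = np.ones(4)
mask = [1, 0, 1, 1, 0, 0, 1, 0]    # hypothetical per-input policy output
print(blockdrop_forward(x, mask))  # executed only 4 of 8 blocks
```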
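The evolutionary search over “neuro-cells” reduces, at its core, to a mutate-and-select loop like the one below. The operation vocabulary and fitness function are hypothetical stand-ins; in real architecture search, evaluating fitness means training the candidate network, which is precisely the expensive step such methods try to amortize.

```python
import random

random.seed(0)
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def mutate(cell):
    # Swap one operation inside the repeating "neuro-cell" pattern.
    child = list(cell)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(cell):
    # Stand-in for validation accuracy of a network built by stacking
    # this cell; in real NAS this requires (expensive) training.
    return sum(op != "identity" for op in cell) + random.random()

# A simple (1+1) evolutionary loop over cell designs.
best = ["identity"] * 4
best_fit = fitness(best)
for _ in range(50):
    child = mutate(best)
    child_fit = fitness(child)
    if child_fit > best_fit:
        best, best_fit = child, child_fit
print(best)
```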
IBM expects that next year will bring even more progress for the AI industry. Here are three trends to watch for:
- Causality will increasingly replace correlations: Everyone knows that the rooster’s crowing at dawn does not “cause” the sun to rise, and, conversely, that flipping a switch does cause a light to turn on. While such intuitions about the causal structure of the world are integral to our everyday actions and judgments, most of today’s AI methods are fundamentally based on correlations and lack a deep understanding of causality. Emerging causal inference methods allow us to infer causal structures from data, to efficiently select interventions to test putative causal relationships, and to make better decisions by leveraging knowledge of causal structure. In 2019, expect causal modeling techniques to emerge as central players in the world of AI. (A toy example contrasting correlation with causal adjustment follows this list.)
- Trusted AI will take center stage: This year, a number of organizations responded to data breaches and consumer privacy concerns by establishing ethics advisory boards, and we’ve seen increased research investment in the “pillars of trust” (algorithmic fairness, explainability, robustness, transparency), along with increased efforts in deploying AI for social good. In 2019, we’ll begin to see these efforts become central to how companies build, train and deploy AI technologies. We expect to see special focus on transferring research advances in this space into real products and platforms, along with an emphasis on encouraging diversity and inclusion on technical teams, to ensure that many voices and perspectives guide technological progress.
- Quantum could give AI an assist: In 2019 we’ll see accelerated traction in quantum experimentation and research, and new research on how quantum computing can potentially play a role in training and running AI models. A core element of quantum algorithms is the exploitation of exponentially large quantum state spaces through controllable entanglement and interference. As the complexity of AI problems grows, quantum computing—which thousands of organizations are already accessing via IBM’s cloud quantum computing services—could potentially change how we approach AI computational tasks.
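To illustrate the causality point above with a toy example: when a confounder Z drives both a treatment X and an outcome Y, the raw correlation between X and Y overstates X’s effect, while backdoor adjustment (averaging the within-stratum effect over the distribution of Z) recovers it. The data below is simulated so the true effect, 0.2, is known.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both the "treatment" X and the outcome Y,
# inflating the raw X-Y correlation. True causal effect of X is 0.2.
Z = rng.integers(0, 2, n)
X = (rng.random(n) < 0.2 + 0.6 * Z).astype(int)
Y = (rng.random(n) < 0.1 + 0.2 * X + 0.5 * Z).astype(int)

naive = Y[X == 1].mean() - Y[X == 0].mean()

# Backdoor adjustment: average X's within-stratum effect over P(Z)
# to estimate P(Y | do(X=1)) - P(Y | do(X=0)).
adjusted = sum(
    (Y[(X == 1) & (Z == z)].mean() - Y[(X == 0) & (Z == z)].mean())
    * (Z == z).mean()
    for z in (0, 1)
)
print(f"correlational estimate: {naive:.3f}")         # biased upward
print(f"backdoor-adjusted estimate: {adjusted:.3f}")  # close to 0.2
```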