The Future of AI/ML: Taking a Human-Centric Approach to Innovation

As AI/ML advances, developers have the responsibility and opportunity to ensure machine-made decisions are human-led.

Ensuring Machine-Made Decisions Are Human-Led

What really happens when you send a text, attach a document to an email, or send commands from an app on your phone? You’re generating data, and lots of it. At the beginning of the decade, the average person generated 1.7 MB of data every second, and that volume keeps growing. Rapidly advancing computers continue to process information at ever faster speeds, but what can we do with all these results?

The rise of artificial intelligence (AI) and machine learning (ML) is helping us leverage the data we produce to improve technology and our daily lives. AI/ML could also become more integral to creating and testing products in the future. As real-world advances in AI/ML unlock new capabilities, we have a tremendous responsibility to ensure our machines help us make the best decisions for the greater good of society.

In a recent Testing 1, 2, 3 episode, we hosted Mike Tamir, Head of Data Science and AI for the Susquehanna International Group, and Nuria Oliver, Co-Founder and Director of the Institute of Human Centered AI. The discussion explored pressing questions beyond the basics of “what is machine learning” and gave us deeper insight into the future of these powerful data tools.

Training Machines with Data

We’ve seen autonomous and electric vehicles begin to interact more intelligently with their surroundings, and data is at the heart of this evolution. Telemetry installed in cars records and transmits data about the vehicle’s environment, including camera images. Advances in radar and lidar sensing, which detect, track, and image objects using reflected radio signals and laser pulses, can produce a three-dimensional, unstructured picture of the surroundings. Machine learning happens when that captured data is processed, stored, queried, and used to inform the car’s actions.
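
To make that pipeline concrete, here is a minimal sketch, with synthetic points and made-up thresholds rather than any production stack, of how unstructured 3D lidar data can be filtered into a simple decision signal:

```python
import numpy as np

# Toy lidar frame: 5,000 returns as (x, y, z) in meters,
# with x pointing forward, y left/right, z up. Purely synthetic.
rng = np.random.default_rng(0)
points = rng.uniform(low=[-2, -10, -1], high=[60, 10, 3], size=(5000, 3))

# Keep returns inside a simple "driving corridor": up to 40 m ahead,
# within the lane width, and above the road surface.
ahead = (points[:, 0] > 0) & (points[:, 0] < 40.0)
in_lane = np.abs(points[:, 1]) < 1.5
above_road = points[:, 2] > 0.2
corridor = points[ahead & in_lane & above_road]

# A crude decision signal: distance to the nearest return in the path.
if corridor.size:
    print(f"Nearest obstacle: {corridor[:, 0].min():.1f} m ahead")
else:
    print("Corridor clear")
```

Real systems replace hand-set thresholds like these with learned models, which is exactly where the captured data comes in.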

Nuria Oliver shared that capturing human behavior data also helps refine models for autonomous vehicles, but building processes to obtain that data in the first place can be laborious. When humans started test driving autonomous vehicles, copilots trained to prevent accidents would annotate the driver’s maneuvers in real time. This supervised annotation gave meaning to the data collected by the vehicle’s different sensors, resulting in dynamic graphical models.
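
As a toy illustration of how such annotations supervise a model, here is a sketch with synthetic sensor features and an invented labeling rule standing in for the copilot, not Oliver’s actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic sensor features per time window: [speed, steering, accel]
X = rng.normal(size=(600, 3))

# Hypothetical stand-in for the copilot's real-time annotations.
def annotate(window):
    speed, steering, accel = window
    if accel < -0.8:
        return "brake"
    if abs(steering) > 1.0:
        return "lane_change"
    return "cruise"

y = np.array([annotate(w) for w in X])

# The labels give meaning to the raw sensor data: the model learns
# to map sensor patterns back to human-annotated maneuvers.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```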

Over the decades, we’ve built better tools and machines not only to capture data but also to analyze it so the results are more useful. Tamir explained that data scientists and machine learning researchers are always trying to understand how data “matches,” or how patterns in data can be linked to “semantically significant representations of data” such as unstructured images or text.
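
As a deliberately simple example of that matching, far cruder than the representations Tamir describes, here is a bag-of-words sketch that scores how closely short texts match a query:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "lidar point clouds map the car's surroundings",
    "vibration telemetry hints at turbine wear",
    "the recipe calls for two cups of flour",
]
query = ["point clouds describing a car's surroundings"]

# TF-IDF turns each text into a vector of word weights; cosine
# similarity then measures how closely those patterns overlap.
vec = TfidfVectorizer().fit(docs + query)
scores = cosine_similarity(vec.transform(query), vec.transform(docs))[0]
for doc, score in sorted(zip(docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Modern systems learn far richer representations, but the underlying idea of matching patterns to meaning is the same.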

Whether it’s for a self-driving car, a photo editing app, or another application, training machines to sift through mountains of data and analyze results more effectively will help advance product capabilities. We’re only at the beginning of understanding how AI/ML has empowered machines to move beyond what human brains can do. But how exactly can we develop tests to evaluate AI if the tasks are beyond human ability?

Finding Failures to Identify Normal Representations

Approaching test through failure detection, or intentionally looking for things out of the ordinary, can help establish a representation of normal behavior and give a basis for repeatable test data. Machine learning is frequently used for tasks like predictive maintenance: predicting when a system might go down so downtime can be reduced. For example, telemetry sensors can be installed on large machines, like the wind or tidal turbines used for clean energy, to measure vibrations that might indicate a failure. Likewise, instrumentation can be attached to cell phones to predict disruptions or glitches. Deep learning techniques tend to identify these signals faster because their algorithms are learned from data rather than hand-coded by humans.

Tamir explained, “When we’re talking about anomaly detection, you’re taking a sequence of very complex data to create a representation of normal behavior. That distillation of data guides what normal looks like.”
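
A minimal sketch of that distillation, using synthetic vibration features and an off-the-shelf detector rather than Tamir’s method: fit a model only on normal readings so it learns what normal looks like, then score new data against it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" vibration features per reading: [rms, peak, kurtosis]
normal = rng.normal(loc=[1.0, 3.0, 3.0], scale=0.2, size=(1000, 3))

# Fitting on normal data only distills a representation of
# normal behavior from the training set.
detector = IsolationForest(random_state=0).fit(normal)

# Two new readings: one typical, one with elevated vibration levels.
new = np.array([[1.0, 3.1, 2.9],
                [2.5, 8.0, 6.5]])
print(detector.predict(new))  # 1 = looks normal, -1 = anomaly
```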

Exercising Caution with AI/ML

AI/ML will have a tremendous societal benefit in transportation, healthcare, security, and beyond, but we must be cautious about the consequences when technology is entangled in people’s lives. Health-related AI applications are already being used to monitor health, activity, and well-being in nursing homes. Passive Wi-Fi sensing systems can measure a patient’s movements and vital signs through walls, without any physical contact, to detect daily activities and even life-threatening events. That signal data must be accurate and reliable to provide the best quality of care.
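
To see why accuracy matters, here is a toy sketch of how a vital sign could be read out of such a signal, using a synthetic amplitude trace and a plain FFT peak pick; real Wi-Fi sensing pipelines are far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10.0                      # samples per second
t = np.arange(0, 60, 1 / fs)   # one minute of signal

# Synthetic channel-amplitude trace: a 0.25 Hz breathing component
# (15 breaths per minute) buried in noise.
signal = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.normal(size=t.size)

# Find the dominant frequency within a plausible breathing band.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)   # roughly 6 to 42 breaths/min
breathing_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {breathing_hz * 60:.0f} breaths/min")
```

A noisy or miscalibrated signal shifts that spectral peak and reports the wrong rate, which is precisely the risk when care decisions depend on the readout.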

As Oliver affirmed, training models with data to make decisions can sometimes remove human error and offer a more objective representation of reality: “Humans have a lot of unconscious biases. We are susceptible to corruption. We have enemies and friends. We are tired, we get hungry, we have a bad day. We have so many reasons as to why our decisions might not be optimal.”

AI/ML offers tremendous opportunities to analyze data at a scale never seen before, but Oliver warned, “These systems are not foolproof, and they can be fooled. And you can also generate a lot of artificial data that looks like real data… and you can totally confuse the systems and so forth.”
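
A toy illustration of that fragility, using a gradient-sign perturbation of a simple linear classifier in the spirit of FGSM; real attacks and defenses are far more sophisticated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Two overlapping clouds of "real" two-dimensional data.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

x = np.array([[0.9, 0.7]])  # an ordinary-looking class-1 point
w = clf.coef_[0]

# Step each feature against the model's weights by about one noise
# standard deviation: the point still looks plausible, but the
# prediction can flip.
eps = 1.0
x_adv = x - eps * np.sign(w)

print("original :", clf.predict(x)[0], clf.predict_proba(x)[0].round(2))
print("perturbed:", clf.predict(x_adv)[0], clf.predict_proba(x_adv)[0].round(2))
```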

Bad data can lead to bad results, and if you trust systems blindly, you can be fooled as well. Anthony Scriffignano, Chief Data Scientist at Dun & Bradstreet, made a similar point: “Be very careful. It’s not your data that’s going to get you in trouble. It’s what you believe that’s going to get you in trouble.”

Looking Into an AI/ML-Enabled Future

There are long-standing fears that AI might take over the world or steal jobs, but it’s counterproductive to see this technology as an outright threat, Oliver remarked. “The key to creating these systems with people at the center is to have a critical view when evaluating them. Instead of viewing these systems as a threat to modern professions previously thought to be unaffected by machine learning and AI systems, education of these systems and how to interpret them will be key going forward.”

The future of machine learning, according to Oliver, will require human supervision and input, along with analysis of the data that is produced through these systems. Oliver noted, “I do think that we have a tremendous opportunity to increase our potential and to find synergies between humans and human abilities and computers and their abilities in this concept of augmentation of human intelligence.”

Hear more from Mike Tamir and Nuria Oliver about the AI/ML revolution in the full podcast episode on Testing 1, 2, 3.

For more insights, listen to other episodes from season 2 of our Testing 1, 2, 3 podcast. This engineering podcast connects you with tech leaders discussing how test plays a pivotal role in solving society’s biggest challenges, now and in the future.