By Siddharth Pai
In the past, I have written on various aspects of cognitive science, or the science of learning, that researchers in the artificial intelligence (AI) field draw upon as they attempt to build more ‘learning’ capability into the software they write. The hope is that this software will learn by itself, without additional programming, as it runs in real-world applications. Remember, AI is distinct from automation, which simply seeks to mechanise repetitive tasks now performed by human beings.
My first musings on the limitations of cognitive science arose from conversations I had some years ago with John Fox, a former professor in the department of engineering science at the University of Oxford. Fox, an interdisciplinary scientist, was working on reasoning, decision-making, and other theories of natural and artificial cognition. He told me that psychologists have known for years that human decision-making is flawed, even if sometimes amazingly creative, and that overconfidence is an important source of error in routine human caregiving settings. A large part of the motivation for applying impartial AI in medicine or human caregiving comes from the knowledge that to err is human and that overconfidence is an established cause of clinical mistakes. Overconfidence is a human failing, not a machine’s; it has a huge influence on our personal and collective successes and failures.
Fox told me that he made an error in thinking that AI was like the other sciences that support medicine. It is taken for granted that medical equipment and drug companies have a duty of care to show that their products are effective and safe before they are released for commercial use. He assumed that AI researchers would similarly recognise a duty of care to all those potentially affected by poor engineering or misuse in safety-critical settings. He now realises that these assumptions were naïve.
Those building commercial tools based on technologies derived from AI research have to date focused on enthusiastic marketing to win customers; safety has taken a back seat. Considering that AI applications were introduced into medicine in the first place on the presumption that AI could counter the human failing of overconfidence, Fox continues to be surprised at how optimistic software developers are. He says they always seem to have supreme confidence that worst-case scenarios won’t happen, or that if they do, managing them is someone else’s responsibility. In contrast, pharmaceutical companies are tightly regulated to make sure they fulfil their duty-of-care obligations to their customers and patients. Proving that drugs are safe is an expensive process and runs the risk of revealing that a claimed wonder drug isn’t effective.
But there is also an important distinction between automation and AI. There are still plenty of tasks now managed by humans, such as reporting on child-care and elder-care outcomes, that can simply be streamlined and automated without ever using a single concept from the world of AI. This ‘low-hanging fruit’ can provide a palpable uptick from the status quo and vastly improve human caregiving.
For instance, in a recent conversation with Anoop Baliga, a product manager at US-based Binti Inc., I learned that his firm has been focused on improving child foster-care outcomes in the US. The world certainly has millions more children in orphanages than it ought to, and this seems an impactful area to focus on. The US, being a developed nation, has a rather active Department of Health and Human Services, which is responsible for the over 400,000 children who are taken from their biological parents (or elsewhere) and placed with other families or in group homes so that they can be given foster care in a presumably safer environment.
However, the statistics from foster care in the US are quite dire: 30% of homeless people are former foster youth; 50% of foster youth will be homeless at some point in their lives; and 25% of prison inmates are former foster youth. Sadly, the majority of sexually trafficked youth are current or former foster youth.
Binti has seized upon the technology opportunity in child welfare. Currently, most child welfare agencies rely on technology built in the 1990s, either through custom consulting projects or as platform-based solutions from firms that don’t specialise in child welfare. In addition, most of these systems were built with compliance goals in mind, not necessarily the welfare of each child placed in the foster-care network. One would hope that a fresh look at these systems, or their replacement with newer technology, would result in better outcomes for children than the horrendous ones on display in the previous paragraph.
Binti claims to be focused on reinventing child welfare and believes that every child should have a safe, stable, and loving family. In the US, the firm currently works across 30 states with some 200 agencies that serve more than 100,000 children in care. It uses a ‘software-as-a-service’ model to allow regular releases with new features and updates; it also offers live support by email, chat, and phone, and its products are built by a team with direct experience in child welfare services.
This doesn’t require sophisticated AI; it requires an overhaul of existing systems at US child welfare agencies. Let’s not ignore the low-hanging fruit that can deliver vastly better outcomes through simple modernisation because we have stars in our eyes about the promise of AI.
The writer is a technology consultant and venture capitalist