January 25th, 2023
Jericho Chambers | by Margaret Heffernan | March 2019
Some years ago, I joined a conference hosted by one of Silicon Valley’s top venture capitalists. It was the kind of event I knew well: all the CEOs funded by the VC gather to compare notes on what they’re seeing in the marketplace and are provided with (or subjected to) some pearls of wisdom from those with more experience.
Once upon a time I was one of those CEOs; now I was supposed to dispense wisdom about wilful blindness. I don’t remember much about the conference beyond its opening, which went something along these lines:
There is nothing wrong with medicine that getting rid of doctors won’t fix.
There is nothing wrong with education that getting rid of teachers won’t fix.
There is nothing wrong with the legal system that getting rid of lawyers won’t fix.
… our host stopped just short of saying there was nothing wrong with democracy that getting rid of voters wouldn’t fix.
I sat up and paid attention. This was not the tech sector as I’d known it. All those years back, in what I’ve come to think of as Internet 1.0, we had been full of curiosity, wondering what we could invent with this new technology and how we might develop it to enable, empower and even liberate everyone.
“We were idealists, we were naïve”
We hadn’t gone into it to make money – there wasn’t any in the beginning. And we hadn’t gone into it for power – there wasn’t much of that either. We were idealists, we were naïve.
How things had changed. Now the full force of Big Tech was marshalling its overwhelming power against … teachers? Doctors? This was definitely different, a calculated attack on society as we know it.
Since those early days, tech has become so much bigger, richer and more aggressive. It tricks kids into spending money; it tricks newlyweds and parents into buying things they didn’t know they were buying; it recruits subjects for experiments no one knew they were part of; it siphons off data. Who knows what else can be laid at its door?
This is not the way to build trust. It is a way to build distrust. And nowhere is that deeper than in discussions about artificial intelligence.
But first, what do I mean by trust? According to Veronica Hope-Hailey’s rich research, it is a mix of four ingredients: ability (competence), benevolence, integrity and predictability (consistency).
But even in its early phase, these four qualities are notably absent in corporate experiments with AI. Instead, there are a whole bunch of problems that, thanks to diligent investigators, keep being brought to light.
No consent
Earlier this year, it was revealed that Pearson, a major AI-education vendor, inserted “social-psychological interventions” into one of its commercial learning software programs to test how 9,000 students would respond. They did this without the consent or knowledge of students, parents, or teachers.
This doesn’t appear to be consistent with Pearson’s values nor does it exude integrity.
Bias
96 percent of the world’s code has been written by men. And there’s an army of scientists (if you need them) who have demonstrated that what is made or written is a reflection of those who make it. (In art, this is deemed a virtue.) So we already start from a deeply troubled, profoundly unrepresentative place.
Using AI for hiring selection has been shown – by Amazon – to be ineradicably biased: trained on historical data of overwhelmingly male employees, it selects… overwhelmingly male employees, and deselects women. After two years of trying to fix this, even Amazon gave up. But not all companies have. So when AI meets diversity policy, what you get is, at the very least, NOT consistent.
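If it helps to see the mechanism, here is a minimal sketch in Python – synthetic data and an assumed scikit-learn dependency, not Amazon’s actual system – showing how a model trained on historically biased hiring labels learns to score otherwise-identical candidates differently by gender.

```python
# A minimal sketch, assuming numpy and scikit-learn; synthetic data,
# not any real company's system. Past recruiters favoured men, so the
# labels are biased -- and the model learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)            # genuinely job-relevant signal
is_male = rng.integers(0, 2, size=n)  # 0 = female, 1 = male

# Historical hiring labels: gender mattered as much as skill did.
hired = (skill + 1.5 * is_male + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender:
p_female, p_male = model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1]
print(f"P(hire | female) = {p_female:.2f}")
print(f"P(hire | male)   = {p_male:.2f}")
# The male candidate scores far higher for the same skill. Dropping the
# gender column doesn't fix this when proxies for gender (clubs,
# colleges, word choice on a CV) remain among the features.
```

The point is not the particular numbers but the shape of the failure: the model is faithfully consistent with its history, not with the diversity policy its buyers believe they are applying.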
Inadequate data sets
Deducing mood, psychological state, sexual orientation, intelligence, or a likelihood of paedophilia or terrorism by “analyzing” facial expression keeps being shown to be inaccurate and outdated, and to rest on biased, unrepresentative datasets. This is not benevolence.
Training AI to seek a “criminal face” using data consisting of prisoners’ faces does not represent crime; it represents which criminals have been caught, jailed and photographed. A high percentage of criminals are never caught or punished, so the dataset cannot adequately represent crime itself. We are back in the land of phrenology… or worse.
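To make the sampling problem concrete, here is a small simulation in Python with invented numbers (my assumption, not drawn from any real study): two groups offend at identical rates, but one is policed far more heavily, so the dataset of caught-and-photographed offenders ends up badly skewed.

```python
# A minimal sketch with invented numbers: a dataset of *caught*
# offenders measures policing intensity, not criminality.
import numpy as np

rng = np.random.default_rng(1)
n_per_group = 100_000
offend_rate = 0.05                    # identical true offending rate
catch_rate = {"A": 0.10, "B": 0.40}   # unequal policing (assumed)

caught_counts = {}
for group, p_catch in catch_rate.items():
    offenders = rng.random(n_per_group) < offend_rate
    caught = offenders & (rng.random(n_per_group) < p_catch)
    caught_counts[group] = int(caught.sum())

total = sum(caught_counts.values())
for group, count in caught_counts.items():
    print(f"group {group}: {count / total:.0%} of the 'criminal face' dataset")
# Roughly 20% A vs 80% B, despite identical offending. A model trained
# on these photos learns who gets photographed, not who commits crime.
```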
We all know there are some scientific research programs best not pursued – and this might be one of them.
Illegality
Hiring AI used by fast-food companies has been shown by Cathy O’Neil to screen out those with any hint of a history of mental illness – a move specifically prohibited by the Americans with Disabilities Act. But attempts to investigate this have so far been stalled by claims that the AI is a trade secret and cannot be disclosed. One law for humans – another for machines. This isn’t integrity in action.
Socially deaf definitions of success
Attempts to use AI to allocate school places more fairly in Boston backfired completely when it turned out that nobody programming the AI had the faintest idea – let alone experience – of the way that poor families live, or of the timetables that working three jobs imposes on them and their children. Not only were the results worse – they came wrapped in insult. This wasn’t benevolent and it was, frankly, incompetent.
These are real cases. There are more. Each one might be nit-picked apart but the key issue is this:
AI oversteps a fundamental boundary between objective analysis and moral judgment.
When such moral judgments are made, people deserve a chance to understand them and to validate or contest them. Claims of trade secrecy specifically militate against this. Ethical issues are treated as legal and policy issues – a way of sidelining them that mirrors the way hierarchies and bureaucracies facilitate, indeed drive, wilful blindness in organizations.
AI simply amplifies both the risk and the lack of accountability to an unimaginable scale. You could say it delivers the status quo PLUS. So AI has the capacity to increase and sustain marginalization, corporate malfeasance and inequality.
All of the companies involved in developing it know this. That’s why there’s a whole roster of organizations trying to figure out how to make AI the commercial gold rush it promises to be – while also hoping to silence fears that anything could possibly go wrong. But there are two difficulties with their approach:
“It would be both naive and counterproductive to say law enforcement shouldn’t have these new technologies. They’re going to, and I think they’re going to need them. We can’t have police in the 2020s policing with technologies from the 1990s”[1]
“‘Sit down. Shut up. When I want something from you, I’ll ask’, is the tenor of so-called public consultation”
This is the language of propaganda: telling people that these new technologies are inevitable – when they aren’t – and that they’re unequivocally productive – when they aren’t – and that therefore there is no need, no POINT, in asking questions. ‘Sit down. Shut up. When I want something from you, I’ll ask’ is the tenor of so-called public consultation.