When you believe in things that you don’t understand

Posted on Tuesday, November 06, 2018

Last week, Telefonica announced that it was “one of the first companies in the world to have ethical Artificial Intelligence (AI) guidelines to guarantee a positive impact on society”. This is indeed the first time I have seen such a thing in the telecom industry, especially from a company that does much of its business in Latin America.

The document itself does not take much time to read; it is quite high level but then that is what ‘principles’ are supposed to be. I have summarized them even further below. Please review the webpage to get the complete picture.

  1. Fair AI: Ensuring that algorithms do not bias results unfairly on the basis of race, gender, religion, sexual orientation, etc.
  2. Transparent and explainable AI: Telefonica commits to be “explicit” about the kind of “personal and non-personal data” that its algorithms are using. The company also assures that “if the decisions significantly affect people’s lives, we will understand the logic behind the conclusions”.
  3. Human-centric AI: “AI should be at the service of society and generate tangible benefits for people.” Telefonica also says it is concerned about “fake news, technology addiction and the potential reinforcement of societal bias in algorithms”.
  4. Privacy and security by design: This is the longest and, perhaps unsurprisingly, most detailed ‘principle’ as it tries to cover multiple contingencies: personal and anonymous/aggregated analyses, company-developed or with partners, applicable law in the company’s multiple operating territories etc.
  5. Working with partners and third parties: Telefonica commits to apply the same principles to its supplier-provided solutions that it would apply to internally developed projects.

In a previous life, I spent a lot of time developing strategies, business principles and policy statements like this. One of my rules was always “would saying the opposite be a legitimate option?” If it wasn’t then there was nothing unique about the policy; it merely repeated a fundamental tautology. (“Make money”, “Serve customers”, “Obey the law” are examples.)

On the one hand, most of the statements here can be viewed similarly. For example, would a company really set out to build an AI platform that discriminated against anyone on the basis of “race, color or creed” as it used to be described?

On the other hand, experience has shown that many AI applications are biased, that managers do not understand and cannot explain the decisions that come out, that security has been weak, and that companies have not understood what partners have done ‘in their name’.

These principles are indeed often fundamental tautologies but, in the AI case, it has become necessary to restate them. Or maybe even just state them. Too much AI has masqueraded as innocent science experiments when it was, in fact, merely reinforcing the prejudices of the developers or permitting others to hijack the tool and foist their prejudices on the world.

As a consequence, the issue of ‘ethical AI’ has received a lot of press recently as the technology has become more pervasive and some high-profile ‘scandals’ have raised serious questions. One internet giant in particular (Hint: the name starts with “F”) has had a particularly bad time of it lately as its targeting algorithms generate considerable bad press.

It is tempting to wonder whether Telefonica developed this because the business people, the legal team or the communications team thought it necessary. Certainly, the communications people were involved – this is not some arid pdf written by the chief counsel – and probably all three groups contributed. But I have to ask myself what the balance was.

In particular, I wonder if any business leader would have naturally come up with “AI should be at the service of society and generate tangible benefits for people.” Last I looked, Telefonica’s fundamental purpose is to make money for its shareholders. If the company is developing AI applications, it is doing so to increase revenues, improve customer service or reduce costs. Even the lawyers would not have written that phrase.

That said, businesses suffer the consequences – PR and otherwise – when they are found to be mismanaging their ethical issues.

Over the years, I have had a number of conversations with Telefonica leaders about explicit strategies, business principles, call them what you like. TEF’s management philosophy is based on making clear decisions and communicating them to everyone that needs to be involved.

The communications team might have been heavily involved in assuring that the public and regulators would react positively to the publication. The lawyers may have been involved in making sure that the right things were said.

But management understands that AI is an area where many mistakes have been made, and made precisely because the development teams were not clear on what the rules should be.

AI is not just a bunch of innocent science experiments and if you believe in things that you don’t understand – or do not try to put a framework around – then your superstitions will expose you and your clients to a host of troubles.

(Title Reference: This is the refrain from Stevie Wonder’s 1972 hit Superstition, which reached number 1 in the US. Someday I plan to write an essay on the topic of understandable quantitative analysis, including AI, and I had expected to use this title for that essay. But the latter is only an idea and so it made sense to use the title for this one.)

