Philosophy Complicates The Convergence Story

Frank Diana
2 min read · Sep 17, 2020

In a post from last year, I focused on the Convergence story. That story has one foot in the past and the other in the future. The great advances in human development during the late 19th and early 20th centuries were a convergence story: a period of great invention converged with other domains to enable our modern society. As we stand at the threshold of another period of great invention, the convergence story grows more complicated. This time, two new domains add to the complexity: philosophy and the environment.

The World Economic Forum (WEF) recently explored the philosophy domain, or more specifically, ethics in the context of artificial intelligence. Artificial intelligence is already being called a general-purpose technology comparable to electricity. According to the WEF article, over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force that affects all disciplines, economies, and industries. As I mentioned in my earlier post, all indications point to another period of astounding innovation, and AI sits at the heart of much of it.

As AI increasingly penetrates various aspects of our lives, ethics is a growing area of concern. As the article describes, progress to date is remarkable, but it also creates unique challenges. The authors reference various studies that stress the need for proper oversight to avoid replicating or exacerbating human bias and discrimination. Stories of such issues are already emerging in areas like criminal justice, healthcare, banking, and employment. We lack global governance mechanisms to manage the path of this emerging industrial revolution. When we consider the impact of blurring boundaries between the physical, digital, and biological spheres, the magnitude of the issue becomes clear.

With no global governance, there is a lack of consensus about the oversight processes that should be introduced to ensure the trustworthy deployment of AI systems. The WEF article proposes the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems. The authors articulate twelve considerations for such a framework, listed below; a sketch of how a team might track them as a checklist follows the list. Please see the article for a deeper dive:

1. Justify the choice of introducing an AI-powered service

2. Adopt a multi-stakeholder approach

3. Consider relevant regulations and build on existing best practices

4. Apply risk/benefit assessment frameworks across the lifecycle of AI-powered services

5. Adopt a user-centric and use case-based approach

6. Clearly lay out a risk prioritization scheme

7. Define performance metrics

8. Define operational roles

9. Specify data requirements and flows

10. Specify lines of accountability

11. Support a culture of experimentation

12. Create educational resources
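
To make the list concrete, here is a minimal, hypothetical sketch (in Python) of how a team might encode the twelve considerations as a reviewable checklist for a given AI-powered service. The class names, fields, and example service are assumptions for illustration only; they are not part of the WEF framework itself.

```python
# A hypothetical checklist structure for tracking the twelve considerations.
# Names and fields are illustrative assumptions, not the WEF's own schema.
from dataclasses import dataclass, field


@dataclass
class Consideration:
    name: str
    satisfied: bool = False  # has the review team addressed this item?
    notes: str = ""          # evidence or rationale recorded during review


@dataclass
class AIRiskBenefitChecklist:
    service_name: str
    considerations: list = field(default_factory=lambda: [
        Consideration("Justify the choice of introducing an AI-powered service"),
        Consideration("Adopt a multi-stakeholder approach"),
        Consideration("Consider relevant regulations and build on existing best practices"),
        Consideration("Apply risk/benefit assessment frameworks across the lifecycle"),
        Consideration("Adopt a user-centric and use case-based approach"),
        Consideration("Clearly lay out a risk prioritization scheme"),
        Consideration("Define performance metrics"),
        Consideration("Define operational roles"),
        Consideration("Specify data requirements and flows"),
        Consideration("Specify lines of accountability"),
        Consideration("Support a culture of experimentation"),
        Consideration("Create educational resources"),
    ])

    def open_items(self):
        """Return the considerations not yet addressed for this service."""
        return [c.name for c in self.considerations if not c.satisfied]


if __name__ == "__main__":
    # Example usage with a hypothetical service name.
    checklist = AIRiskBenefitChecklist(service_name="loan-approval-assistant")
    checklist.considerations[0].satisfied = True  # justification documented
    print(f"{checklist.service_name}: {len(checklist.open_items())} items still open")
```

The point of such a structure is simply to make the considerations auditable: each one carries a status and notes, so gaps in oversight are visible before deployment rather than after.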

Originally published at http://frankdiana.net on September 17, 2020.
