How to Ensure Trust in a Digital World
We have a trust issue. In our digital world, it has become increasingly difficult to trust each other. Whether it is another person, an organisation or a device, trust is no longer a given online. This is a serious problem for our society and our democracy; when trust disappears from a society, anarchy is not far behind.
The problem is, the more digital our society becomes, the harder it seems to trust each other. Technology itself is neutral; it is up to the user to apply it for good or for bad. Artificial intelligence can be used to protect rainforests and endangered animals, or to automate hacking. Cryptocurrencies can bring down the cost of remittances, or fund ransomware schemes. Big data can reveal patterns that detect cancer, or power mass surveillance. Unfortunately, technology is increasingly being used in harmful ways, and that erodes trust.
Unfortunately, existing organisations do not help either. Facebook refuses to remove fake ads from its platform, while Google pursues a hidden agenda when collaborating with organisations: collecting data to make money with ads. The Russian FaceApp got access to millions of people’s faces and apparently could use them in any way it wanted, and TikTok is under scrutiny for deleting users’ videos. Add to this the rise of fake news, armies of trolls and click farms, and it is clear that we have a trust problem.
A Contract for the Web
Recently, Sir Tim Berners-Lee launched a plan to save the web from “political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a digital dystopia”. This Contract for the Web aims to commit governments, organisations and citizens to safeguarding the web. It is a great initiative, but it will probably take more than a commitment to safeguard the web.
To understand the trust problem, we have to look at what drives online activity: money. As long as Facebook earns billions of dollars with (fake) ads, it has minimal incentive to change its policies. Shareholders demand continued growth in share price and dividends, and Facebook happily complies, even if that means allowing fake ads on the platform.
Therefore, to regain trust in technology, we need more than a contract for the web. We need a change in mindset. We need regulation that requires organisations to ban fake ads, fake bots and fake news, and that prohibits deepfakes altogether or at least requires disclosure when fake content is published. Above all, we need collaboration, and we need to bring accountability to the web.
Trust in Tech
Regaining trust in technology is a shared responsibility. Organisations, governments and consumers have to work together to ensure trust in a digital world. Vulnerabilities, privacy violations and data leaks cannot be solved in isolation. Organisations that refuse to comply (for example, by refusing to remove fake ads) should face the consequences, while organisations that can demonstrate ethical and trustworthy behaviour should be rewarded.
This week, Huawei invited me to join their Trust in Tech symposium. In the Western world, Huawei is perceived as having a bad reputation when it comes to trust, a perception the company denies. This was their first conference aimed at encouraging a discussion on trust, but Huawei is doing more. To prove its trustworthiness, Huawei has offered its 5G core technology for sale to a Western buyer, thereby effectively creating a competitor. For a one-time fee, the buyer would get access to existing 5G patents, licences, code, technical blueprints and production know-how. A bold move to back up that claim.
The conference in London was not only about trust in Huawei but also about how we can trust technology in general. Paul Scanlan, CTO of Huawei’s Carrier Business Group, acknowledged that Huawei has a problem. According to Scanlan, Huawei is a great sales company (despite the geopolitical issues, 2019 will see a 20% increase in revenue) but not a great marketing company: instead of talking to the media and governments, it has been talking to engineers (Huawei spends $15 billion on R&D every year).
Therefore, Huawei brought together a diverse audience to discuss trust. The various speakers (from Huawei as well as academics and other experts) shared their insights on how to trust technologies such as 5G (which is at the heart of the current debate around Huawei) and artificial intelligence. Let’s discuss the insights shared on each of these two technologies.
5 Reasons to Trust 5G
5G is a core technology of Huawei, and it will be an important driver of the 4th Industrial Revolution. 5G will have a significant impact on our society, which is why I mentioned the rise of private 5G networks in my latest annual technology trend prediction. However, there is also a lot of misunderstanding around 5G, with many people incorrectly believing that it is dangerous. To counter these persistent misconceptions, Paul Scanlan shared five reasons why we can trust 5G technology:
- 5G does not create harmful electromagnetic fields. Even the WHO states that there is no convincing evidence that radio-frequency signals from base stations harm human health.
- 5G is more secure than 4G. 4G does not encrypt your identity very well, and a base station can easily be spoofed. 5G uses stronger encryption (256-bit instead of 128-bit), enhanced interconnection security and unified authentication. By design, 5G is more secure than 4G or even Wi-Fi (which is inherently insecure, as networks can easily be spoofed); see the key-size sketch after this list.
- 5G will spur economic growth by transforming industries. Many companies are already experimenting with the technology; the higher data speeds, lower latencies and increased network capacity will enable, among other things, predictive maintenance applications that will revolutionise how organisations operate.
- 5G produces less carbon than 4G. While 5G will undoubtedly lead to more data being created, transmitting that data will be far more energy-efficient than over 4G, up to 90% more efficient.
- 5G increases the level of trust in digital systems. Thanks to its low latency, remote specialists can deliver accurate results in delicate situations: for example, a clinician can perform an ultrasound examination on a patient through a tactile feedback system from the other side of the world. Because the feedback works in real time, remote healthcare solutions become feasible.
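To make the 128-bit versus 256-bit comparison in the security point above a bit more concrete, here is a minimal Python sketch using the third-party cryptography package and AES-GCM. It is not 5G’s actual cipher suite or key hierarchy (those are specified by 3GPP); it simply illustrates what a longer symmetric key means in practice: the same encrypt/decrypt workflow, but a vastly larger key space for an attacker to search.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Key length is what "128-bit" vs "256-bit encryption" refers to.
key_128 = AESGCM.generate_key(bit_length=128)   # 16-byte key
key_256 = AESGCM.generate_key(bit_length=256)   # 32-byte key

# Encrypt and decrypt a (made-up) piece of subscriber data with the 256-bit key.
aesgcm = AESGCM(key_256)
nonce = os.urandom(12)  # 96-bit nonce; must never be reused with the same key
ciphertext = aesgcm.encrypt(nonce, b"subscriber identity", b"network header")
assert aesgcm.decrypt(nonce, ciphertext, b"network header") == b"subscriber identity"

# Brute-forcing a key means searching 2**128 vs 2**256 possibilities.
print(f"a 256-bit key space is 2**128 (~{2**128:.0e}) times larger than a 128-bit one")
```

Each extra bit doubles the number of possible keys, so moving from 128 to 256 bits multiplies the search space by 2^128, which is the headline difference behind the “256-bit instead of 128-bit” claim.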
5G is coming, and Andrew Williamson, VP of Huawei Market Insights, believes that by 2025 developed markets will have 90% 5G coverage, resulting in intelligent connectivity value chains across the globe. This will enable organisations everywhere to work closely together, but to do so, they need to be able to trust the technology and each other.
To trust 5G, stakeholders need to be educated and know the facts about 5G. Industry stakeholders need to collaborate to manage and mitigate risks, and governments need to develop regulatory policies that incentivise the implementation of use cases. Within this playing field, Huawei wants to be inclusive, collaborate with industry players and remain part of a global supply chain.
Building Trust and Transparency in AI
The other elephant in the room was artificial intelligence and whether we can trust it. Examples of AI gone rogue are plentiful, and as AI becomes more advanced, trust becomes key. To create trustworthy human-machine interactions, we need transparency (i.e. we need to prevent black boxes) and a clear understanding of the true capabilities of AI.
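Transparency can start with something as simple as preferring models whose decision logic can be inspected. As a small illustration (my own sketch, not something presented at the symposium), the Python snippet below trains a tiny decision tree with scikit-learn and prints its rules and feature importances, so a reviewer can audit exactly how it arrives at a prediction, in contrast to a black-box model.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A deliberately small, interpretable model: every split can be read and audited.
data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# Print the learned decision rules in plain text.
print(export_text(model, feature_names=list(data.feature_names)))

# Feature importances show which inputs drive the decisions.
for name, importance in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

A shallow tree will rarely match the accuracy of a large neural network, but it makes the trade-off between performance and transparency explicit, which is precisely the kind of clarity the trust debate asks for.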
According to Phil Baty, Chief Knowledge Officer at Times Higher Education, AI is an existential threat that we should worry about, a lot. However, Baty told the audience that innovation should not be a zero-sum game and that the key to trusting AI is collaboration between academia and tech companies. That is why Huawei values its partnerships with top universities around the world.
Such collaboration connects the fundamental AI research of academics with the applied innovation and research of tech companies. Universities are truth seekers and the conscience of society; close collaboration with tech companies gives universities the resources to keep pursuing that role, while also giving tech companies a conscience. Instead of the ‘move fast and break things’ culture of companies such as Facebook, Baty argues that the culture of universities should be “move slow and fix things”. Operating at this intersection can help ensure trust in AI.
Final Thoughts
We indeed have a trust issue, but trust is not yet completely lost. There are plenty of initiatives by universities and organisations around the world to restore this trust. For example, the Center for Humane Technology addresses the harmful, extractive attention economy and promotes building humane technology that aligns with humanity.
With Imagjn, we are also working on restoring trust. By building a decentralised reputation protocol for the internet, we aim to reduce the impact of fake news and bring accountability to the web (while allowing internet users to remain anonymous if wanted).
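To give a flavour of how accountability and anonymity can coexist (a purely hypothetical Python sketch, not Imagjn’s actual protocol), a pseudonymous identity can be nothing more than a key pair: claims about content are signed, so anyone can verify they came from that pseudonym and tie them to its track record, without ever learning who is behind the key.

```python
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A pseudonymous identity: a key pair instead of a real name.
identity = Ed25519PrivateKey.generate()
pubkey = identity.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# A signed claim about a (hypothetical) piece of content.
claim = json.dumps({"content_id": "article-123", "verdict": "accurate"}).encode()
signature = identity.sign(claim)

# Anyone can verify the claim came from this pseudonym (raises InvalidSignature if tampered with).
identity.public_key().verify(signature, claim)
print("claim verified for pseudonym", pubkey.hex()[:16])
```

In such a scheme, reputation would accrue to the public key over time, which is where the accountability comes from even though the person behind it can stay anonymous.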
Finally, initiatives such as the Trust in Tech symposium organised by Huawei also help. Having lively discussions on how we can ensure confidence in a digital world contributes to restoring that trust. After all, understanding each other is the first step in creating trust among individuals, organisations and governments. It seems all is not lost, but a lot of work is yet to be done.
Image: FR Design/Shutterstock