Coping with AI anxiety?
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.
"AI isn't falling into the wrong hands. It's being built by them" - The Independent.
"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.
"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.
"Time is running out: six ways to contain AI" - The Times.
Real humans wrote all these pieces, although AI may have helped draft them. Undoubtedly, the process of producing the copy and getting it onto screens and into readers' hands involved AI somewhere.
Like articles about AI, AI Moments are almost impossible to avoid.
Most of these articles are gloomy predictions of the future, prompted by Geoffrey Hinton's resignation from Google over his concern about a race between AI tech firms conducted without regulation or public debate.
Indeed, these journalists argue that if the people building AI are concerned, and quite often cannot fully explain how their own systems work, then everyone else should be worried as well.
A few point to the recent open letter calling for a six-month research pause on AI. Its authors believe that governments and society can agree within six months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.
These are not new concerns for many of us, or for those who read my occasional posts here.
None of the articles references the similar 2015 letter led by the Future of Life Institute, "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", which gained far more comprehensive support. It was signed by many of the same signatories as this year's letter, with a similar set of requests, only eight years earlier.
Or the 2017 "Autonomous Weapons Open Letter", signed by over 34,000 experts and technologists. (Autonomous Weapons Open Letter: AI & Robotics Researchers - Future of Life Institute)
Technologists have been asking for guidance, conversation, engagement, and even regulation in the field of AI for over ten years.
We have also worried, publicly and privately, that the situation mirrors 2007, with technologists set to replace bankers as the perceived cause of all our troubles.
Although in this case, most technologists have warned that a crash is coming.
In 2015, I ran a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations, and fireside chats were intended to prompt preparation for AI by 2025, especially for the command and control systems due to enter service in 2018.
A key aspect was building the platform to exploit and plan how AI will change military operations.
Yet the response was negative.
"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.
Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.
They were not alone in ignoring automation. Our militaries, politicians, and broader society have been worried by more significant concerns and issues than ones created by computer programs, bits of software, and code that dreams of electronic cats.
One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.
So how do we use this opportunity wisely?
People are scared of what they do not understand. Everyone should grow their understanding of AI: how it works, what it can do, and what it should not do.
Here are a few suggestions to help you prepare: light tips to prompt debate and provoke challenge, aimed at people who read the headlines and want to know more, rather than at experts and AI developers.
First, I suggest three books to understand where we are today, where we are heading, and where we should be worried.
Books
Life 3.0: Being Human in the Age of Artificial Intelligence - Max Tegmark, 2017. The author is the President of the Future of Life Institute and behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.
AI Superpowers: China, Silicon Valley, and the New World Order - Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.
21 Lessons for the 21st Century - Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically worse for humans, given that drivers impaired by alcohol or drugs cause 30% of road deaths and distracted drivers a further 20%.
Film
Three broad films prompt consideration of AI in society. I wondered if films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact with it:
Ex Machina - Dir. Alex Garland, 2014. Deliberately thought-provoking thriller that explores AI, consciousness, and ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.
Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm.
Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as he can.
Websites
Three websites that will help you explore AI concepts, tools, and approaches:
Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy.
AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects.
The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.
Experiments
Hands-on experiments with AI, and the basics of its building blocks. These require a little coding awareness, but most concepts are well explained and clearly demonstrated:
Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development.
OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments.
TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers.
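To make those building blocks tangible before diving into a library like TensorFlow, here is a minimal sketch of the core idea underneath most of them: training a single artificial neuron by gradient descent. It uses plain Python only; the training data (the logical OR function), the learning rate, and the epoch count are illustrative choices for this sketch, not anything prescribed by the sites above.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1) - the neuron's activation."""
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=5000, lr=0.5):
    """Fit one neuron (two weights and a bias) with gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = sigmoid(w1 * x1 + w2 * x2 + b)
            err = pred - target  # gradient of the log loss w.r.t. the pre-activation
            # Nudge each parameter a small step against its gradient
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Illustrative data: the logical OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
for (x1, x2), target in data:
    pred = sigmoid(w1 * x1 + w2 * x2 + b)
    print(f"OR({x1},{x2}) -> {pred:.2f} (target {target})")
```

Frameworks like TensorFlow do essentially this, scaled up to millions of neurons, with the gradients computed automatically rather than by hand.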
These are just introductions and ideas, not anything like an entire course of education or meant to cover more than getting a conversation started.
It also struck me, making these lists, that many of the texts and media are over five years old. That is likely indicative of the time media needs to prove its relevance; more recent items, especially those predicting futures, need time to prove their worth.
I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.
Leaders need AI but does AI need leaders?
A lot of the discussion about AI focuses on how it affects us as individuals in the present. We tend to use a personal and immediate perspective, and often apply a simple criterion: if AI is not impacting me now, then it is nothing to worry about.
We also tend to have a uniform view of war as always being the same and universal, a bloody, violent mess resolved by brave people fighting in the mud. We think that technology can only make war faster, more brutal, and more violent.
Historians argue that warfare was shaped by how we learned to communicate 100,000 years ago and that writing 6,000 years ago made it worse. We have heard military leaders claim that the same values that helped them advance in their careers are the ones we will need in the future.
We seem determined to convince ourselves that warfare will not change.
Yet society is changing, rapidly, and there are three factors that leaders should consider when thinking about whether AI needs leaders:
We need AI to compete and win
We cannot process the amount of data, generate valuable insights, or operate at the speed we need to succeed without AI support for our military services. This is a fact based on similar experiences in other sectors. However, current approaches are planning to add a bit of AI here and there without much careful thought or thorough evaluation. I have written elsewhere about the piecemeal approach to AI. I would also add that industry does not always help by tempting users to look at amazing new tools to buy.
We need AI to win and apply that AI across large military operations areas.
AI is transforming every sector and industry with more horizontal and streamlined organisational structures. AI enables more distributed and collaborative decision-making, faster and easier sharing, and higher potential of individuals and teams. Teams can work more efficiently and quickly with the help of AI and do not require the same amount of managerial oversight and feedback.
Just as robots replaced workers, supervisors, inspectors, managers, and other middle-level roles in manufacturing lines, AI will do the same for organisations that rely on information, data, and insights.
The middle managers are the most vulnerable to AI disruption.
Moreover, future leaders will have very different career paths from our current leaders.
Military leaders tend to follow the footsteps of their predecessors. They are advised to learn from this staff role, grow in this command position, and operate in this context.
One day, they will be promoted, as long as they stick to the way.
However, we see that automation is changing, replacing, reshaping, and limiting those paths. We cannot expect future leaders to cope with very different structures, challenges, and ways of solving complexities without acknowledging these crucial changes. We need to rethink that path and make the adjustments today.
Unless, that is, we are in an organisation that is arrogant, slow, resistant to change, reliant on technology to do the same things faster rather than differently, dependent on hierarchical command and control, and planning for change over years rather than months.
But where do the leaders who stay in those organisations come from, and how do they develop? Successful organisations must design, select, train, and change to offer future leaders valuable opportunities that challenge and enhance their leadership skills, value human creativity, and reward their efforts.
Since 2015, there has been a series of predictions showing that AI is fundamentally changing the world. I have compiled the key steps separately, as not everything can be summarised in a few bullets. Our world continues to progress through those predictions, for good or bad, the main ones being:
Demonstrate that AI can achieve better-than-human performance at skills like translation and image recognition
Build the infrastructure that would enable global scale and capacity with cloud storage and compute
Construct component building blocks that would enable the adoption of AI across sectors and industries
Democratise AI by making it easy to implement and access through packaged capabilities to generate content and output
Interface with AI using conversational and intuitive models that empower anyone with access
Replace mundane repeatable activities and tasks to enhance human ingenuity
Agree that AI is changing society and that it needs international collaboration on its introduction and control
Test AI to show that it acts in unpredictable and unintended ways
Create common principles and approaches to develop safe and trustworthy AI
Transition human activity in key sectors with AI alternatives that reduce costs and increase AI development
Adopt AI in high-risk areas like security, justice, and defence to improve performance and reduce (own side) military casualties
Use AI to develop and scale future AI performance and adoption within and across roles, functions, and sectors
Support humanity as they transition from work that involves mundane, repeatable activities into more creative, insightful activities
Increase digital skills to anticipate and adapt to working alongside AI toolsets
Develop international and national planning, funding and support for people who are no longer employed or employable
Anticipate highly automatable sectors to help those affected transition to employment elsewhere
Recognise that the right mindset about AI matters more for a safe long-term transition than understanding the technical toolset alone
Plan for a society that enhances human ingenuity with AI that empowers human life with value and worth
In all these cases, we have taken the easy path, taking the parts that reduce costs or deliver immediate gains, ignoring the more complex elements like international agreement, and are yet to consider the consequences in a meaningful, planned, and funded way.
We've stripped out the easy, taken the quick gains, and left future generations to pick up the bill.
This list shows how we are transforming society with the revolution of AI. This revolution also demands a radical change in the aspects of warfare and the military. The change should originate from the militaries themselves, who can harness the advantages of AI, but it will likely come from external sources, such as their new recruits or their enemies.
As an evangelist, I believe that military leaders today have a duty to prepare their command and their successors for an automated future. This is not about accepting the common view that AI will not alter the nature of warfare or that warfare is always the same.
It requires a deeper reflection on how your command could be affected and acting on those opportunities.
Sometimes, spreading this warning feels like Niels Bohr publishing his quantum theory of the atom: it is difficult for society to imagine the inevitable outcomes. Yet the world had thirty years before Oppenheimer applied those theories and talk turned to destroying worlds with atomic bombs.
Today, unlike the atomic age, the research time for disruptive AI from theory to deployment is measured in months, not decades.
When we ask whether AI needs leaders, we reach a key conclusion: in a world where automation has taken over simple and routine tasks, we still need leadership to tackle the most complex challenges. But how and where can our future leaders develop the skills to meet those demands?
Leadership is about preparing your teams for the future. And that is also where AI needs leadership.
What does Global Britain Stand and Fall for?
Our approach to national security must explain more: “what does the UK stand for?”
Today's Integrated Review rightly prioritises the changes required across Defence. It emphasises the importance of technology, especially around data and information, highlights the urgent need to transform, and explains how the future battlespace will differ from today.
It also details how "China's growing international stature is by far the most significant geopolitical factor in the world today. The fact that China is an authoritarian state with different values presents challenges for the UK and our allies. China will contribute more to global growth than any other country in the next decade with benefits to the global economy."
The Review highlights Russia's obvious threat, based on its previous aggression in Ukraine, as an "acute threat to our security. Until relations with its government improve, we will actively deter and defend against the full spectrum of threats emanating from Russia."
The Review also emphasises the need for a digital backbone to gain an information advantage in multi-domain operations over our adversaries. What this advantage looks like or how we will achieve it will become clearer shortly.
According to the Review, the future is obvious:
more technology,
an immediate threat from Russia, and
a global shift towards China.
A problem-solving and burden-sharing nation with a global perspective
Most importantly, the Review describes a strategic intent that defines what the UK stands for and the strategic goals we want to achieve. It champions the UK as a global power stretching from Portsmouth to the Pacific. It also stresses the importance of narrative and utilising information for success.
A successful strategy for a global power cannot define itself purely by reacting to events beyond its control or responding to another's strategic activities. It cannot be reactive to Chinese growth or merely respond to Russian aggression after it happens. The UK cannot hope to benefit from events that it does not anticipate.
Our approach must explain more: "What does the UK stand for?"
Some people may take this strategic vision for granted. They argue that the logic of Palmerston still applies: "Our interests are eternal and perpetual, and those interests it is our duty to follow."
It may also seem old-fashioned to highlight beliefs like democracy, free speech, universal education, open trade, universal rights, or fairness, as the Prime Minister does in the Review.
Alternatively, it may seem too modern to champion access to information, a free internet without censorship, diversity and inclusion, climate action, or tolerance of different views.
All these beliefs have faced recent domestic challenges inside the UK and US. Their explicit inclusion is a strong statement, even if it is unlikely to gain the same newspaper headlines as killer robots, cyber warriors, or foreign threats.
Pragmatists may want to keep all options open all the time using realpolitik, yet even realists require achievable goals.
Understanding what we stand for provides three strategic advantages:
Gains international initiative,
Confirms investment priorities, and
Drives our internal understanding.
It secures information advantage with a more explicit narrative that takes the initiative from our adversaries. A strategic vision creates clarity, consistency, and trust. Rather than respond to an adversary, our strategic narrative enables us to challenge our adversaries.
It creates the debate around defence and security priorities before conflict rather than the discussion occurring during conflict. The Review defines a direction of travel for UK Defence and Security and, over the next few weeks, more details will emerge on what that means for sailors, soldiers, and aircrew. The inevitable arguments around force cuts and changes are more straightforward with a clearer strategic vision and narrative.
Most crucially, a more explicit strategic stance clarifies what we will fall for, and what we expect our troops to die for in the future. Our cause may be right, but the ultimate sacrifice we ask of our forces is far easier to justify when that cause is clear.
There is much in the Review to praise and to champion. Our direction is now more precise, and our choices more apparent. It acknowledges that Defence is not isolated from the digital revolution transforming our broader society. It clarifies the threats we face and the approaches that adversaries use against us.
We now need to take this clear national strategic vision and use it to gain the international initiative, prioritise our investments, and deliver on our goals.
Being relevant in an increasingly competitive international environment
The Integrated Review of Security, Defence, Development and Foreign Policy changes everything. Now the Armed Forces must change and achieve the whole mission to seize the opportunities ahead.
Today, the Defence Command Paper and Defence Industrial Strategies are published. In response, Defence needs to do more than just purchase exquisite equipment. Now is the time to change mindset and culture across every part of the organisation to make us relevant for the future.
Deloitte proposes three shifts to amplify the changes required:
Learning, not failing
Collaborate to compete
Curate cultural transformation
Last week's Integrated Review described a clear future vision within an increasingly competitive international environment. Global Britain will need to sustain advantages through science and technology, shape the international order, strengthen security, and build resilience.
For the first time in decades, UK military forces have a sharp purpose in a broader strategy.
Our purpose is more than defeating our enemies in total war, although, as many retired officers will state this week, winning battles remains a core component of any global power. It is no longer the only component and, possibly, it is the least relevant within the Integrated Review's vision.
"Conflict and Instability" was one sub-section of the Integrated Review, occupying just one of its 112 pages. Yet we will spend more time this week considering our ability and capabilities to complete that page than any other part of the review.
Any officer, serving or retired, will state that completing the mission is the key to military success. Mission Command shapes our Armed Forces, and the whole body adopts a unifying purpose to deliver missions. We need to amplify that unifying mindset to change our people and all enabling organisations within the MOD.
Our Armed Forces need to consider, describe, and deliver how they will be relevant across every part of the review: championing technology power, defending human rights as a force for good, deterring and disrupting threats, and supporting UK national resilience. CGS has published his views on how the Army will deliver against these goals: Future Soldier | The British Army (mod.uk)
Three immediate shifts for relevance
Deloitte has three immediate shifts to make our Armed Forces more relevant across the Integrated Review vision.
Learning, not failing. Organisations that learn-fast and adapt quickly perform better than those that fail-fast or fear failure. Deloitte encourages teams to build their deliveries around a desire to learn and demonstrate progress quickly. We adopt a shift in the scale of thinking. Rather than waiting until the entire programme fails before attempting to learn, we encourage rapid learning cycles with quick response times. Troops employ this mindset on operations, and we want to deliver its wider adoption across Defence.
Collaborate to Compete. Global Britain needs a united presence to export technologies in an increasingly competitive international environment and shape the open international order. The UK success in developing and adopting COVID-19 vaccines, which Deloitte supported, shows a path to follow. UK Government demonstrated how it could create markets, enable alternatives, and rapidly exploit success within existing regulations.
MOD competitions often reduce total contribution, with single tenders awarded after prolonged and gruelling selections, leaving the MOD entirely reliant upon the sole survivor. We need to change to an approach that increases our international competitiveness and exploits innovation as new technologies from different suppliers appear downstream. Deloitte sees technology shifts happening in 18-month cycles, and we need to adopt a similar pace within our procurement and equipment processes.
Curate cultural transformation. Leaders curate cultures and pass that culture on to the next generation. Today's leaders must use the Integrated Review to consider the culture and values that will make Defence relevant in the future for its people, its suppliers, and the international environment. Deloitte has helped organisations preserve cultural strengths that create uniqueness and competitive advantage and adopt new mindsets to become relevant. We also assist in embracing that culture across the entire organisation. The vision from the Integrated Review is bold, challenging, and disruptive. Defence needs to prepare, support, and encourage its people to embrace rather than fear that future.
Together, these three shifts increase our ability to adapt, adopt the capabilities to compete, and be relevant in a competitive future.
We support Global Britain's bold vision
We support Global Britain's bold vision and exciting future set out in the Integrated Review, and will assist the MOD across its enterprise as it adjusts and changes, as we have done for the 175 years of our existence. Like the Prime Minister and CGS, we are incredibly optimistic about the UK's place in the world and our ability to seize the opportunities ahead.
Ethical Principles for Artificial Intelligence from Microsoft
It all begins with an idea.
To realise the full benefits of AI, we’ll need to work together to find answers to these questions and create systems that people trust. Ultimately, for AI to be trustworthy, we believe that it must be “human-centred” – designed in a way that augments human ingenuity and capabilities – and that its development and deployment must be guided by ethical principles that are deeply rooted in timeless values.
At Microsoft, we believe that six principles should provide the foundation for the development and deployment of AI-powered solutions that will put humans at the centre:
Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft.
Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world.
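The fairness principle above can be made concrete with a simple measurement. The sketch below compares selection rates between two groups of similarly qualified applicants; the decision data is hypothetical, and the 0.2 review threshold is an illustrative rule of thumb, not a Microsoft standard.

```python
def selection_rate(decisions):
    """Fraction of positive decisions (1 = recommended) in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical hiring decisions for two groups of similarly qualified applicants
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 recommended: 75%
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 recommended: 37.5%

gap = demographic_parity_gap(group_a, group_b)
print(f"Selection rate gap: {gap:.3f}")
if gap > 0.2:  # illustrative threshold for triggering a human review
    print("Flag for fairness review")
```

A check this simple cannot prove a system fair, but it shows how the principle translates into something measurable that designers can monitor and be accountable for.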