Tony Reeves

Playing Pool with a Robot. Who wins when AI takes the shot?

Imagine you’re playing pool with a friend.

The rules are the same as usual, and you are playing against a good friend who is highly competitive. Much as you like your friend, they really like winning and will seldom let you forget if you lose.

Winning is, therefore, a big thing for both of you.

So, to help you win, you have both agreed to use an AI robot helper. This robot AI can make shots on the table with its clever robot arm, plays pool as well as you do, learns from every shot it takes, and watches you and your friend play. The only restriction is that you can use the AI only on alternate shots, not all the time.

You and your friend alternate shots: first you, then them, then your robot AI, then their robot AI, before returning to you. After a while, you realise that your robot AI is particularly good at getting out of tricky situations and can make shots you usually miss when stuck behind another ball.

Now, towards the end of the game, it is very close. Your robot AI player has got you out of several tricky situations, and you’ve been able to pot most of your target balls. Your friend has your cue ball stuck in a corner, but if you can make the shot, you will win the game. But if you miss it, your friend will very likely clean up the table. Already, they are looking unbelievably smug at how they have snookered you. Surely you cannot let them win and face further smugness?

It’s your robot AI’s turn to take the shot and possibly win the game. Do you let them take it?

Of course, many of us will read this story and believe, quite honestly, that we would stride forward, push the robot to one side, play the shot of our lives and win the game. Hurrah! We like to see ourselves as winners, and letting a robot win on our behalf doesn’t really feel like winning.

Yet, faced with your friend’s smug expression, many of us would let the robot AI take the shot. We would tell ourselves that it’s the robot’s turn and that we’ve played as a team the whole game. If it were the other way around, of course, we would take the shot ourselves, but just this once, it looks like the robot AI will win, and that’s entirely within the rules of the game.

We would hand over to the robot AI, which is exactly how game theory suggests we would behave.

The Game Situation

Now, let’s change the narrative slightly. In this game, your friend has let their AI play every shot. They are so smug and confident that they believe their AI can beat you without your friend taking a single turn. So far, you have stayed in the game and now find yourself in the same situation. You are in a tricky position; you must make the shot to win the game, and it’s your robot AI’s turn to play the shot. Your friend and even their robot AI are now looking at you smugly. Would you still stride forward and take the shot, or let your robot AI, with a slightly higher probability of success, take the shot for you?

In this scenario, playing against a robot AI acting on your friend’s behalf, most people would let their robot AI take the shot and cheer loudly as the ball rolls into the pocket and they win the game. You would feel no shame in letting your robot AI take its turn; after all, your friend’s robot AI has taken every turn for them. Take that, smug friend and their even smugger robot AI!

These situations are precisely where we find ourselves in the adoption and deployment of AI. For the last decade, there have been well-intentioned and deeply considered reports, studies, conferences, and agreements on AI’s ethical and safe adoption. Agreements, statements, and manifestos have been written warning of the dangers of using AI for military, police, and health scenarios. Studies on the future of work, or lack of it, abound as we face an employment market driven by AI.

We would all agree that any AI that harms, damages, restricts or prevents human ingenuity or freedom would be a step in the wrong direction. We are all on the side of ethicists who say that death at the hands of a killer robot should be banned. We would all want to ensure that future generations can work with dignity and respect for at least a liveable wage.

We would agree until competition enters our lives.

Facing the AI Dilemma

Our brave adoption of ethical AI principles may face a sterner test when winning or losing is at stake. At that point, would we remain ethical if losing looked likely? Would our ethical principles survive if we faced defeat by an enemy using AI to win? Applying game theory to these scenarios suggests a bleak future.

We all know that collaboration and working together is often better for us all; it is the basis of modern civilisation. We may also know that tit-for-tat with occasional forgiveness can be a better winning strategy than unconditional cooperation, but only if we can trust the other player to respond in kind. If the other player always takes the harmful but advantageous choice, any competitor will likely follow the same harmful path.
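
To make that game-theory point concrete, here is a minimal sketch in Python of an iterated game between a tit-for-tat-with-forgiveness player and a rival that always defects. It is an illustration only: the payoff values, the 10% forgiveness rate, and the reading of "cooperate" as partial AI adoption and "defect" as full replacement are my own assumptions, not figures from any study.

```python
import random

# Illustrative payoffs: "C" = cooperate (partial AI adoption), "D" = defect (full replacement).
# These numbers are assumptions chosen only to show the dynamic, not real market data.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both sides hold back: shared, sustainable benefit
    ("C", "D"): (0, 5),  # the defector seizes the competitive advantage
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # everyone replaces their staff: a worse outcome for all
}

def tit_for_tat(opponent_moves, forgiveness=0.1):
    """Copy the opponent's last move, occasionally forgiving a defection."""
    if not opponent_moves:
        return "C"
    if opponent_moves[-1] == "D" and random.random() < forgiveness:
        return "C"
    return opponent_moves[-1]

def always_defect(opponent_moves):
    """A rival committed to full AI replacement, whatever the other side does."""
    return "D"

def play(rounds=20):
    seen_by_a, seen_by_b = [], []  # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = tit_for_tat(seen_by_a)
        move_b = always_defect(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print(play())  # after round one, both players are defecting in almost every round
```

Over twenty rounds, the tit-for-tat player ends up defecting in almost every round: once one competitor commits to full replacement, mirroring that choice becomes the rational response, which is exactly the path described below.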

We can see where this approach is heading if the favourable option is partial AI adoption and the negative option is the complete replacement of a service or function with AI. For instance, an organisation that fully adopts AI in its call centres will seize a competitive advantage over others who retain human operators. Its competitors may argue that retaining employees is the right thing to do, that customers prefer real human interaction even if they struggle to tell the difference, or that there is a brand advantage to keeping humans in their service.

Ultimately, though, the AI service will prove significantly cheaper and, based on current deployments, will provide at least as good a service as human call centres in most scenarios. Customers of recent AI call centre deployments report that they cannot tell the difference between humans and AI. Faced with such a choice, will a competitor keep the equivalent of playing their own pool shots, or hand over part of their service to a robot AI?

The immediate impact is faster service, as the wait time for an operator disappears, and a significant reduction in operating costs. The business may use these savings to create better services and products, invest in reskilling call centre staff, or take them as profit and reward.

In the longer term, the impact of this simple game decision, scaling across call centre companies, would be much bleaker.

When 4% of the UK workforce, around 1.3 million people, work in call centres, we can begin to feel the impact of such a decision. In the Newcastle region of North East England, there are 178,000 call centre employees, primarily female. These workers are often the only income earners for their families after the area's manufacturing sector collapsed in the face of cheaper foreign competition and automation.

A few companies may resist this movement and value their human employees more than their profits, revenue, or shareholder return. They may ensure alternative employment is found for their workers or provide reskilling initiatives. The reality is that most workers will be left unemployed, with few transferable skills, in a region that already has the highest unemployment rate in the UK.

The AI Adoption Conundrum

When one company adopts AI and leaves its competitors snookered, they will quickly follow suit. Unlike previous automation waves, such as in car manufacturing, which unfolded over several years, this wave will be fast. Service sector work, especially work centred on information and data, can be automated in months rather than years.

That timescale does not allow time for alternative jobs to be created, new skills to be learned, or fresh employment opportunities to appear. It is a pace that most people would struggle to comprehend or manage, let alone emerge from with better prospects.

The impact rapidly expands beyond the individuals struggling to find employment. The first effect would be a surge in benefit claims, increasing government expenditure. At the same time, employees’ income tax payments would vanish, and their employers’ national insurance contributions would cease. Costs would rise, and tax income would fall. Local spending would diminish, hurting other businesses and employment.

This pattern has been seen before in the North East and in similar places such as Detroit in the US. In the 1980s, heavy industry and manufacturing collapsed. In 1986 alone, Newcastle was reeling after the closure of the Royal Ordnance factory (400 jobs), two coal mines (2,000 jobs), shipyards (3,000 jobs), British Steel mills (800 jobs), NEI Parsons (700) and Churchills (400): over 7,000 jobs lost in 12 months, and these were just the large employers. Countless small businesses also closed at the same time.

The consequence of these closures in the North East was decades of stagnation, until service industries like call centres eventually moved into the region and employment picked up once more.

Now we stand at a similar crossroads, but this time the threat of AI-induced job displacement looms over the very service sectors that once revitalised those communities. The allure of efficiency and cost-cutting is undeniable, yet the human cost could be far steeper than any short-term gain. Just as in the pool game, where we were tempted to hand control over to a robot AI, businesses and individuals in the real world may find themselves willing to let AI take over vital tasks for the sake of winning in a competitive market.

However, unlike in the game, the stakes here are much higher. While letting the AI play might win you a single match, relying too heavily on AI in society could unravel the social fabric of entire regions. The ripple effects of AI replacing human workers are not confined to immediate job losses; they extend to the erosion of livelihoods, communities, and human dignity.

The lesson from our robot AI pool game is clear: the allure of short-term victory should not blind us to the long-term consequences. Winning at any cost, whether in a friendly match or in business, often leads to outcomes that benefit only a few while leaving many behind. We must approach AI adoption not merely through the lens of efficiency but with a deep sense of responsibility toward our workforce and broader society. In the game of life, true victory lies not in replacing humans with AI but in finding a balance that empowers both to thrive.

Tony Reeves

Ensuring the Safety of AI: The Key to Unlocking Autonomous Systems' Potential

As artificial intelligence (AI) revolutionises industries from healthcare to transport, one critical factor holds back widespread adoption: assurance. The Defence Science and Technology Laboratory (Dstl) has released a comprehensive guide, "Assurance of Artificial Intelligence and Autonomous Systems," exploring the steps necessary to ensure AI systems are safe, reliable, and trustworthy.

The Biscuit Book underscores the need for assurance: a structured process that provides confidence in the performance and safety of AI and autonomous systems. Without it, we risk deploying technology either prematurely, when it remains unsafe, or too late, missing valuable opportunities.

Why Assurance Matters

AI and autonomous systems increasingly tackle complex tasks, from medical diagnostics to self-driving cars. However, these systems often operate in unpredictable environments, making their behaviour difficult to guarantee. Assurance provides the evidence needed to instil confidence that these systems can function as expected, especially in unforeseen circumstances.

Dstl defines assurance as the collection and analysis of data to demonstrate a system's reliability. This includes verifying that AI algorithms can handle unexpected scenarios and ensuring autonomous systems behave safely.

This is part of a series summarising guidance and policy around the safe procurement and adoption of AI for military purposes. This article looks at the Dstl Biscuit Books around AI, available here: Assurance of AI and Autonomous Systems: a Dstl biscuit book - GOV.UK (www.gov.uk)

Midjourney v6.1 prompt: Ensuring the Safety of AI: The Key to Unlocking Autonomous Systems' Potential


Navigating Legal and Ethical Challenges

AI introduces new legal and ethical dilemmas, particularly around accountability when things go wrong. The report highlights the difficulty in tracing responsibility for failures when human operators oversee systems but don't control every decision. Consequently, legal frameworks must evolve alongside AI technologies to address issues like data privacy, fairness, and transparency.

Ethical principles such as avoiding harm, ensuring justice, and maintaining transparency are essential in developing AI systems. However, implementing these values in real-world scenarios remains a significant challenge.

From Algorithms to Hardware: A Complex Web of Assurance

The guide covers multiple areas where assurance is necessary:

  • Data: Ensuring training data is accurate, unbiased, and relevant is critical, as poor data can lead to unreliable systems.

  • Algorithms: Rigorous testing and validation of AI algorithms are essential to ensure they perform correctly in all situations.

  • Hardware: AI systems must rely on computing hardware that is secure and operates as expected under all conditions.

Ensuring all these components work seamlessly together is complex, which is one reason we don't yet see fully autonomous cars on the roads.

The Ever-Present Threat of Adversaries

As AI systems become more integrated into society, they become attractive targets for adversaries, including cybercriminals and rogue states. Small changes in data or deliberate attacks on system inputs can cause catastrophic failures. To mitigate these risks, Dstl advocates for rigorous security testing and using trusted data sources.

A Costly but Necessary Process

Assurance comes at a price, but it's necessary to avoid costly failures or missed opportunities. The Dstl Biscuit Book emphasises that the level of assurance required depends on the potential risks involved. For example, systems used in high-risk environments, such as aviation, require far more rigorous testing and validation than lower-risk systems.

Ultimately, assurance isn't a one-time activity. As AI systems evolve and adapt to new environments, ongoing testing and validation are needed to maintain safety and trust.

Looking Ahead

The Dstl Biscuit Book remains a highly relevant reminder of the challenges in ensuring AI systems are safe and reliable. While AI holds incredible potential to transform industries and improve lives, the journey to fully autonomous systems requires a careful balance of technical expertise, ethical responsibility, and robust assurance frameworks.

For now, it's clear that unlocking the full potential of AI and autonomous systems hinges on our ability to assure their safety at every step.

Tony Reeves

Council of Europe Adopts Groundbreaking Framework on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

On September 5, 2024, the Council of Europe introduced a landmark legal framework, CETS 225, also known as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. This Convention sets ambitious goals to align artificial intelligence (AI) systems with fundamental human rights, democratic principles, and the rule of law, offering guidelines to address the opportunities and risks posed by AI technologies.

This is a landmark as the first-ever international legally binding treaty to ensure that the use of AI systems is fully consistent with human rights, democracy, and the rule of law. It was signed by the UK, the US, the EU, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel.

Part of a series summarising AI policy and guidance. This article examines the Council of Europe Framework on AI and Human Rights, Democracy and the Rule of Law that can be found here: CETS_225_EN.docx.pdf (coe.int)


A Balanced Approach to AI

The Convention recognizes AI's dual nature: while AI systems can promote innovation, economic development, and societal well-being, they also carry significant risks to individual rights and democratic processes if left unchecked. The preamble acknowledges AI's potential to foster human prosperity but highlights concerns over privacy, autonomy, discrimination, and the misuse of AI systems for surveillance or censorship.

Scope and Purpose

CETS 225's primary goal is to create an international legal framework governing the entire lifecycle of AI systems—from design and development to deployment and eventual decommissioning. The Convention's scope covers public authorities and private entities involved in AI development, requiring compliance with principles that protect human rights and uphold democratic values.

Key Provisions

  1. Protection of Human Rights: Signatories must ensure AI systems comply with human rights obligations set out by international and domestic law, including safeguarding privacy, preventing discrimination, and ensuring accountability for adverse impacts.

  2. Democratic Integrity: The Convention mandates measures to prevent AI systems from undermining democratic processes, such as manipulating public debate or unfairly influencing elections.

  3. Transparency and Accountability: Signatories must implement mechanisms ensuring transparency in AI decision-making processes, providing oversight and documentation to allow individuals to understand and challenge AI-driven decisions affecting them.

  4. Non-discrimination: A key focus is ensuring AI systems respect equality, particularly regarding gender and vulnerable populations. The Convention mandates measures to combat discrimination and promote fairness in AI outputs.

  5. Risk and Impact Management: The Convention outlines a robust risk management framework for AI systems, including assessing potential impacts on human rights and democracy, applying safeguards, and mitigating risks through ongoing monitoring.


Remedies and Safeguards

CETS 225 establishes the right to accessible and effective remedies for individuals whose rights are affected by AI systems. It requires documentation of AI systems that could significantly impact human rights and mandates that relevant information be made available to those affected. The framework also emphasizes procedural safeguards, ensuring individuals interacting with AI systems know their rights.

International Cooperation and Oversight

The Convention promotes global cooperation, encouraging signatories and non-member states to align their AI governance with its principles. The Conference of the Parties will oversee compliance, provide a platform for resolving disputes, and facilitate the exchange of best practices and legal developments.

A Milestone in AI Governance

CETS 225 represents a significant step towards regulating AI use in a manner that prioritizes ethical considerations and fundamental rights protection. It acknowledges AI's profound societal impact while aiming to ensure its development and application remain aligned with democratic values. The Convention is a model for international cooperation in addressing AI's unique challenges, fostering a future where technology and human rights coexist harmoniously.

As the world grapples with AI's implications, this framework sets a precedent for responsible AI governance on a global scale, balancing innovation with the need to protect individual freedoms and democratic institutions.

Tony Reeves

The Ethical Landscape of AI in National Security: Insights from GCHQ Pioneering a New National Security Model.

In the rapidly evolving field of artificial intelligence (AI), ethical considerations are paramount, especially regarding national security. GCHQ, the UK's intelligence, cyber, and security agency, has taken significant steps to ensure that their use of AI aligns with ethical standards and respects fundamental rights.

This article is part of an impartial series summarising guidance and policy around the safe procurement and adoption of AI for military purposes. This summary looks at GCHQ's published guidance that is available here: GCHQ | Pioneering a New National Security: The Ethics of Artificial Intelligence

Midjourney v6.1 prompt: In the rapidly evolving field of artificial intelligence (AI), ethical considerations are paramount, especially regarding national security.


Commitment to Ethical AI

The guidance emphasises GCHQ's commitment to balancing innovation with integrity. This commitment is evident through several initiatives to embed ethical practices within their operations. They recognise that the power of AI brings not only opportunities but also responsibilities. Here are the critical components of their commitment:

  • Strategic Partnerships: GCHQ collaborates with renowned institutions like the Alan Turing Institute to incorporate world-class research and expert insights into their AI practices. These partnerships ensure AI's latest advancements and ethical considerations inform their approach.

  • Ethics Counsellor: Established in 2014, the role of the Ethics Counsellor is central to GCHQ's ethical framework. The role involves providing guidance on ethical dilemmas and ensuring that decisions are lawful and morally sound. The Ethics Counsellor helps navigate the complex landscape of modern technology and its implications.

  • Continuous Learning: GCHQ emphasises the importance of ongoing education and awareness. Providing training and resources on AI ethics to all staff members ensures that ethical considerations are deeply ingrained in GCHQ culture. This commitment to education helps maintain high ethical standards across the organisation.

Legislative Frameworks

Operating within a robust legal framework is fundamental to GCHQ's ethical AI practices. These frameworks provide the necessary guidelines to ensure their activities are lawful, transparent, and respectful of human rights. Here are some of the key legislations that govern their operations:

  • Intelligence Services Act 1994: Defines GCHQ's core functions and establishes the legal basis for their activities. It ensures that operations are conducted within the bounds of the law.

  • Investigatory Powers Act 2016: This comprehensive legislation controls the use and oversight of investigatory powers. It includes safeguards to protect privacy and ensure that any intrusion is justified and proportionate. This act is central to ensuring that GCHQ's use of AI and data analytics adheres to strict legal standards.

  • Human Rights Act 1998: GCHQ is committed to upholding the fundamental rights enshrined in this act. It ensures that their operations respect individuals' rights to privacy and freedom from discrimination. This commitment to human rights is a cornerstone of their ethical framework.

  • Data Protection Act 2018: Outlines the principles of data protection and ensures responsible handling of personal data. GCHQ's adherence to this legislation demonstrates its commitment to safeguarding individuals' privacy in AI operations.

Oversight and Transparency

Transparency and accountability are crucial for maintaining public trust in GCHQ's operations. Several independent bodies oversee their activities, ensuring they comply with legal and ethical standards. Here are the critical oversight mechanisms:

  • Intelligence and Security Committee (ISC): This parliamentary committee provides oversight and holds GCHQ accountable to Parliament. The ISC scrutinises operations to ensure they are conducted in a manner that respects democratic principles.

  • Investigatory Powers Commissioner's Office (IPCO): IPCO oversees the use of investigatory powers, ensuring they are used lawfully and ethically. Regular audits and inspections by IPCO provide an additional layer of accountability.

  • Investigatory Powers Tribunal (IPT): The IPT offers individuals a means of redress if they believe they have been subject to unlawful actions by GCHQ. This tribunal ensures a transparent and fair process for addressing grievances.

  • Information Commissioner's Office (ICO): The ICO ensures compliance with data protection laws and oversees how personal data is used and protected. This oversight is essential for maintaining public confidence in GCHQ's data practices.

Ethical Practices and Innovation

GCHQ's ethical practices are not just about adhering to the law; they involve making morally sound decisions that reflect their core values. Here's how they incorporate ethics into innovation:

  • AI Ethical Code of Practice: GCHQ has developed an AI Ethical Code of Practice based on best practices around data ethics. This code outlines the standards their software developers are expected to meet and provides guidance on achieving them. It ensures that ethical considerations are embedded in the development and deployment of AI systems.

  • World-Class Training and Education: Recognising the importance of a well-informed workforce, GCHQ invests in training and education on AI ethics. This includes specialist training for those involved in developing and securing AI systems. By fostering a deep understanding of ethical issues, they ensure their teams can make informed and responsible decisions.

  • Diverse and Inclusive Teams: GCHQ is committed to building teams that reflect the diversity of the UK. They believe a diverse workforce is better equipped to identify and address ethical issues. By fostering a culture of challenge and encouraging alternative perspectives, they enhance their ability to develop ethical and innovative solutions.

  • Reinforced AI Governance: GCHQ is reviewing and strengthening its internal governance processes to ensure they apply throughout the entire lifecycle of an AI system. This includes mechanisms for escalating the review of novel or challenging AI applications. Robust governance ensures that ethical considerations are continuously monitored and addressed.

The AI Ethical Code of Practice

One of the cornerstones of GCHQ's approach to ethical AI is their AI Ethical Code of Practice. This framework ensures that AI development and deployment within the agency adhere to the highest ethical standards. Here's a deeper dive into the key elements of this code:

  • Principles-Based Approach: The AI Ethical Code of Practice is grounded in core ethical principles such as fairness, transparency, accountability, and empowerment. These principles are the foundation for all AI-related activities, guiding developers and users in making ethically sound decisions.

  • Documentation and Transparency: To foster transparency, the code requires meticulous documentation of AI systems, including their design, data sources, and decision-making processes. This documentation is crucial for auditing purposes and helps ensure accountability at every stage of the AI lifecycle.

  • Bias Mitigation Strategies: Recognising the risks of bias in AI, the code outlines specific strategies for identifying and mitigating biases. This includes regular audits of data sets, diverse team involvement in AI projects, and continuous monitoring of AI outputs to detect and correct discriminatory patterns.

  • Human Oversight: The code emphasises the importance of human oversight in AI operations. While AI can provide valuable insights and augment decision-making, final decisions must involve human judgment. This approach ensures that AI serves as a tool to empower human analysts rather than replace them.

  • Security and Privacy Safeguards: Given the sensitive nature of GCHQ's work, the code includes stringent security and privacy safeguards. These measures ensure that AI systems are developed and deployed in a manner that protects national security and individual privacy.

  • Continuous Improvement: The AI Ethical Code of Practice is a living document that evolves with technological advancements and emerging ethical considerations. GCHQ regularly reviews and updates the code to incorporate new best practices and address gaps identified through ongoing monitoring and feedback.

Conclusion

GCHQ's approach to ethical AI in national security reflects its commitment to protecting the UK while upholding the highest standards of integrity. Its legislative frameworks, transparent oversight mechanisms, and ethical practices set a high standard for other organisations.

As it continues to respond to technological advancements, GCHQ balances security with respect for fundamental human rights.

This approach ensures that as it harnesses the power of AI, it does so responsibly and ethically to keep the UK safe and secure.

Tony Reeves

Critical Steps to Protect Workers from Risks of Artificial Intelligence | The White House

In a significant move to safeguard workers from the potential risks posed by artificial intelligence, the White House has announced a series of critical steps designed to ensure ethical AI development and workplace usage. These measures emphasise worker empowerment, transparency, ethical growth, and robust governance frameworks.

This article is part of an impartial series summarising guidance and policy around the safe procurement and adoption of AI for military purposes. This piece looks at recent White House guidance on protecting workers from the risks of Artificial Intelligence available here: https://www.presidency.ucsb.edu/documents/fact-sheet-biden-harris-administration-unveils-critical-steps-protect-workers-from-risks

Midjourney v6.1 prompt: Safeguard AI Workers


Critical Principles for AI in the Workplace

  1. Worker Empowerment: Workers should have a say in designing, developing, and using AI technologies in their workplaces. This inclusive approach ensures that AI systems align with the real needs and concerns of the workforce, particularly those from underserved communities.

  2. Ethical Development: AI technologies should be developed in ways that protect workers' interests, ensuring that AI systems do not infringe on workers' rights or compromise their safety.

  3. AI Governance and Oversight: Clear governance structures and human oversight mechanisms are essential. Organisations must have procedures to evaluate and monitor AI systems regularly to ensure they function as intended and do not cause harm.

  4. Transparency: Employers must be transparent about the use of AI in their operations. Workers and job seekers should be informed about how AI systems are utilised, ensuring no ambiguity or hidden agendas.

  5. Protection of Rights: AI systems must respect and uphold workers' rights, including health and safety regulations, wage and hour laws, and anti-discrimination protections. Any AI application that undermines these rights is unacceptable.

  6. AI as a Support Tool: AI should enhance job quality and support workers in their roles. The technology should assist and complement human workers rather than replace them, ensuring that it adds value to their work experience.

  7. Support During Transition: As AI changes job roles, employers are responsible for supporting their workers through these transitions. This includes providing opportunities for reskilling and upskilling to help workers adapt to new demands.

  8. Responsible Use of Data: Data collected by AI systems should be managed responsibly. The scope of data collection should be limited to what is necessary for legitimate business purposes, and data should be protected to prevent misuse.

A Framework for the Future

These principles are intended to be a guiding framework for businesses across all sectors. They must be considered throughout the entire AI lifecycle, from design and development to deployment, oversight, and auditing. While not all principles will apply equally in every industry, they provide a comprehensive foundation for responsible AI usage.

Conclusion

The US's proactive approach to regulating AI in the workplace is a significant step towards ensuring that AI technologies are developed and used in ways that protect and empower workers. By setting out these clear principles, the Administration aims to create an environment where AI can drive innovation and opportunity while safeguarding the rights and well-being of the workforce. Similar measures will be crucial as we balance technological advancement with ethical responsibility.



Tony Reeves

Dignity at Work and the AI Revolution - TUC Union Perspectives

The TUC Manifesto, "Dignity at Work and the AI Revolution", outlines fundamental values and proposals designed to safeguard worker rights, promote fairness, and ensure the responsible use of AI in employment settings.

Part of a series examining global AI policies and guidance. As artificial intelligence (AI) continues to reshape the workplace, the Trades Union Congress (TUC) has issued a manifesto to ensure that technological advancements benefit all workers; it can be found here: https://www.tuc.org.uk/research-analysis/reports/dignity-work-and-ai-revolution

Midjourney v6.1 prompt: Dignity at Work

"Dignity at Work and the AI Revolution" outlines fundamental values and proposals designed to safeguard worker rights, promote fairness, and ensure the responsible use of AI in employment settings.

A Call for Responsible AI

AI is rapidly transforming the way businesses operate, driving productivity and innovation. However, the TUC warns that AI could entrench inequality, discrimination, and unhealthy work practices without proper oversight. The manifesto highlights the need to act now, ensuring that AI is deployed in ways that respect worker dignity and maintain fairness, transparency, and human agency.

Worker-Centric AI

The TUC outlines several core values to guide the implementation of AI in the workplace:

1. Worker Voice: Workers should be actively involved in decisions about AI, particularly in its application to critical functions like recruitment and redundancy. Consultation with unions and employees is essential to ensure fairness.

2. Equality: AI systems must not perpetuate bias or discrimination. The manifesto highlights the dangers of facial recognition, which can yield biased outcomes if trained on unrepresentative data. All workers should have equal access to AI tools, regardless of age, race, or disability.

3. Health and Wellbeing: New technologies must not compromise workers' physical or mental health. The manifesto stresses that any system introduced should enhance rather than diminish workplace safety and wellbeing.

4. Work/Home Boundaries: With the rise of remote work, accelerated by the pandemic, there is growing concern that AI monitoring blurs the line between personal and professional life. The TUC calls for clear boundaries to prevent constant surveillance and ensure employees can disconnect from work.

5. Human Connection: AI should not replace the human element in decision-making. The manifesto emphasises preserving human involvement, especially regarding important workplace decisions.

6. Transparency and Explainability: Workers need to know when AI is being used and understand how decisions about them are made. Transparency is vital to building trust and ensuring that technology operates fairly.

7. Data Awareness and Control: Employees should have greater control over their personal data. AI systems must be transparent about how data is used and give workers a say in how their data is handled.

8. Collaboration: The TUC stresses that all stakeholders—workers, employers, unions, policymakers, and tech developers—must collaborate to ensure AI benefits everyone.

Turning Values into Action

The manifesto doesn’t just present a set of ideals; it outlines concrete proposals for how these values can be realised in practice:

1. Regulating High-Risk AI: The TUC proposes focusing regulatory efforts on high-risk AI systems that could significantly impact workers' lives. Sector-specific guidance should be developed with input from unions and civil society to ensure fairness.

2. Collective Bargaining and Worker Consultation: Employers should consult with trade unions when deploying AI systems, particularly those deemed high-risk. Collective agreements should reflect the values of fairness, transparency, and worker involvement.

3. Anti-Discrimination Measures: The TUC calls for legal reforms to protect workers from AI discrimination. The UK's data protection laws should be amended to ensure that discriminatory data processing is always unlawful, and those responsible for discriminatory AI decisions should be held accountable.

4. The Right to Disconnect: The manifesto proposes a statutory right for workers to disconnect from work, ensuring that AI systems do not intrude on their personal time or create excessive stress due to constant surveillance.

5. Transparency Obligations: Employers should be required to maintain a register of AI systems used in the workplace, detailing how they are used and their impact. This register should be accessible to all workers and job applicants, ensuring transparency.

6. Human Review of AI Decisions: Workers should have the right to request human intervention and review when AI makes important decisions about them, particularly in high-stakes situations like performance reviews or redundancies.

Shaping the Future of AI at Work

The TUC’s manifesto is a timely call to action. As AI becomes an increasingly integral part of the workplace, ensuring that its deployment does not undermine worker rights or exacerbate inequality is vital. By promoting transparency, equality, and worker involvement, the TUC aims to ensure that AI serves the interests of all rather than the few. The document serves as both a roadmap for the ethical use of AI in employment and a warning about the potential risks of unchecked technological advancement.

As the TUC stresses, the time to act is now, before AI-driven decisions in the workplace become the norm. In the age of AI, the future of work must prioritise dignity, fairness, and human agency.

Tony Reeves

The AI Election: How Fast Intelligence Threatens Democracy


Prompt: Politician campaigning for votes, large crowd - Midjourney v6

In 2024, the US and the UK will see two significant elections: the first for the US President and a General Election in the UK. Politicians and technologists are already concerned about AI's role in creating, targeting, and spreading misinformation and disinformation, but what can be done to keep democracy free?

If politicians are serious about preventing AI from interfering with elections, they need to start with the source of misuse, as AI could have a far more damaging impact on democracy than social media or foreign powers like Russia.

This is because AI is exceptionally good at creating believable narratives, whether true or false, and in our age of Fast Intelligence, where an answer is just a voice prompt away, we seldom take the time to check or verify convincing stories. We now regularly read of professionals who misused AI to produce business reports, court documents, or news stories. These professionals either failed to check the hallucinated story created by the AI or, worse, lacked sufficient knowledge to recognise that their fast intelligence was false.

Examples include lawyers who presented court papers with fictitious case references, academics who submitted evidence to government inquiries containing fabricated incidents, and politicians deliberately using deepfake technology on themselves to gain publicity.

Our desire to generate and consume fast intelligence to save time is leading to a lazy acceptance of false information.

Generative AI, now prevalent and using models and transformers to create textual and visual narratives that mimic human creativity, is particularly good at generating convincing stories. Trained on the content of the World Wide Web and optimised with specific data, GenAI is the epitome of fast intelligence. It is also the acme of trust-building.

We are sceptical about an unsourced internet page, especially if we are using that information for a weighty decision. It is human nature to mistrust the unknown. Equally, if something is labelled "Generated by a Computer" or "Written with AI", people are more sceptical.

Yet an advert does not need to be labelled as "AI generated". Filter that same page through a GenAI transformer, make it sound convincing by adding specific phrases and facts relevant to the reader, tune the language towards an individual's preferences, distribute it in a way that will attract that person's attention, and then follow it up with further, similar, convincing stories, and you have a compelling pattern with which to influence a decision. Repeat this constantly, every minute, every day, for every individual.

GenAI allows a genuinely individual and effective marketing campaign to be generated at negligible cost.

This is where fast intelligence far exceeds recent elections' excesses, corruption, or fakery. Governments were rightly investigated when personal information and data were used to distribute political messages, targeting specific groups and demographics to influence an election. This targeting, whilst more specific than previously experienced, was still quite broad and required both specialist skills and significant crafting to be effective. The individuals at the heart of such scandals were richly rewarded due to the uniqueness of their skill set, and they could influence groups rather than individuals.

No longer. Fast intelligence can now deliver optimised messages targeting individuals and, with the proper access to data, deliver those messages far more effectively than previously witnessed. It can deliver those messages at greater volume, faster pace, and significantly lower cost.

Anyone with an internet connection and willingness to experiment with GenAI can produce a cheaper, quicker, and more effective mass distribution of highly impactful information. This enables any politically minded individual to have the disruptive potential previously controlled by nation-states, prominent political parties, or global social media organisations.

This year, GenAI will generate previously unseen levels of misinformation and disinformation.

For a democracy, most fake cases will fall into the misinformation category, where information has been wrongly sourced, wrongly evidenced, or is just plain wrong. The intent may have been fair, but the facts used to prove the intent were false. Misinformation is also the category people are most likely to witness during this year's elections, because GenAI creates misinformation simply by being flawed and imperfect.

We see regular cases of individuals trusting AI-generated material because it appears compelling and evidentially supported. A recent personal case occurred when I asked an AI to write a 250-word response to a question. The answer was 311 words, but the AI insisted it was 250. Eventually, after a long pause, the AI admitted it was 311 and that it "will be better at counting words in the future".

If we use GenAI to generate election campaign materials, then, due to GenAI's flawed nature, we will see an increase in misinformation, where false facts are used to support a political narrative. Most politicians and political parties remain broadly honest in their public engagements with the electorate, and these cases of misinformation can be resolved honestly.

Disinformation, where false facts are distributed deliberately to influence or persuade a decision, is far more worrying. Disinformation used by politicians seeking to win at all costs or foreign states intending to sway a political outcome can be highly damaging. Disinformation can immediately influence a decision, perhaps swaying a crucial seat or electoral count.

Generating disinformation with GenAI is also increasingly easy, despite the controls introduced into these tools. If you ask tools like Google Gemini or OpenAI ChatGPT to create a disinformation campaign plan, they will initially reply, "I'm sorry, but I can't assist with that request."

However, using a few simple workarounds, a malicious actor can create a campaign and then target individuals with personalised products, and this is without resorting to creating their own GenAI tool sets, which would be even more effective.

If used this way, GenAI will not just influence swing seats and states or push specific demographics to vote against their interests. The long-term damage is far more profound, because GenAI disinformation harms democracy itself. Even when discovered, disinformation erodes public trust in politicians and politics. It fosters the view that all politicians are dishonest, or the belief that all elections are rigged, not just a very few. Left unchecked, it creates a culture in which all information is assumed to be disinformation, so that no information can be trusted, or only information from a particular person or group can be.

GenAI disinformation damages the trust in our democratic institutions.

Politicians are looking at GenAI with fear, and as a result, some are seeking to control how or when it is used during political activities. This movement will gain little traction before the 2024 elections, but assuming a spotlight is shone on GenAI disinformation after the elections, we can expect more vigorous calls for control in 2025. Sadly, that may be too late.

In 2024, the UK Electoral Commission will be able to ask political parties how much they spent on AI-generated materials after the election but not during it. There will be no legislation or compulsion to explain that a political message, image, or advert has been created using AI. Using deep fakes

Some voluntary Codes of Practice on Disinformation have been introduced in the EU, and the Digital Services Act forces large online platforms to prevent abuses like disinformation on their systems. The DSA also prevents the micro-targeting of minors with AI-generated campaigns, yet minors are too young to vote anyway. Where campaigns are distributed in direct messages or not in bulk, the DSA has limited controls.

More recently, the EU AI Act requires foundation model providers (like Google, Microsoft, and OpenAI) to ensure robust protection of fundamental rights, democracy, the rule of law, health, safety, and the environment. An extensive list, and nobody wants foundation model creators to damage these fundamental rights wilfully.

Negotiations continue in the UK and EU on how technology companies will prevent their products from being used for illegal activities and, in the UK, the "legal but harmful" category. This needs to be quickly resolved and is unlikely to be agreed upon before 2025.

Yet the honest politicians negotiating and legislating for these changes are missing the key issue: AI cannot, by itself, resolve these challenges to democracy or elections. AI is a tool like any other software, hardware, device, or vehicle. A criminal using a car to rob a bank or a hacker using a computer to defraud money does not have the defence that it was the tool's fault for not stopping them from committing the crime. Any judge would give short shrift to such a defence and convict the criminal on the evidence of the crime.

Honest politicians must act before dishonest ones seize an advantage and our democracies are damaged beyond repair. We need to bring three aspects together:

  1. Using AI to support democracy. AI can enable greater access and awareness of political processes and content. It can monitor trends across elections and predict results, enabling the identification of discrepancies or deliberate manipulations. AI can also be used to detect the use of other AI with proper training and development. AI could be used by bodies like the Electoral Commission to build trust, visibility, and confidence in democracy.

  2. Punishing criminal activity at the source of the crime. The source of election fraud is the person committing the fraud, not the digital printer that produced fake voting slips. Crimes that damage democracy must face the harshest punishments. A politician discovered to have been elected using GenAI disinformation should be removed from office. Political parties using GenAI disinformation to wrongly change opinions should be removed from ballot papers. These are stiff punishments, harsher than those the foundation model builders are facing. Yet our democratic institutions demand harsh protection. We have waged bloody, painful world wars to protect and ensure democracies can flourish. Punishing corrupt politicians who abuse that democracy is a small price in comparison.

  3. Improve AI awareness. Start campaigns now to highlight how GenAI disinformation could be used to damage democracy. Punishing politicians and monitoring AI exploitation will improve elections, but hostile actors seeking to damage our institutions will care little about criminal punishments. Increasing the electorate's awareness of how AI will be misused helps reduce the damage it can cause and, hopefully, will inoculate the electorate against its worst abuses.

It may sound extreme to bar candidates and remove politicians from office. It is also probable that dishonest politicians will seek to deflect blame onto others to avoid punishment. Yet, if we do not take this situation seriously, democracy will not be fit enough to address these concerns later. These are topics that politicians need to address, as they are best placed to resolve the issues and create energy around the required resolutions. If we allow GenAI disinformation to destroy our trust in democracy, we will never recover that lost trust.

Tony Reeves

What can resolve AI Anxiety?

A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments

Midjourney prompt: books, film, experiments


Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.

"AI isn't falling into the wrong hands. It's being built by them" - The Independent.

"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.

"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.

"Time is running out: six ways to contain AI" - The Times.

Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating their copy and getting it onto the screens and into the hands of their readers will have used AI somewhere.

Like articles about AI, AI Moments are almost impossible to avoid.

Most of these articles are gloomy predictions of the future, prompted by Geoffrey Hinton's resignation from Google over his concern about the race between AI tech firms proceeding without regulation or public debate.

Indeed, these journalists argue that if the people building AI have concerns and, quite often, have yet to figure out how their own systems work, then everyone else should be worried as well.

A few point to the recent open letter calling for a six-month research pause on AI. The authors of this open letter believe that governments and society can agree within six months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.

These are not new concerns for many of us, or for those who read my occasional posts here.

None of the articles references the similar 2015 letter led by the Future of Life Institute, "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter" (Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter - Future of Life Institute), which gained far more comprehensive support, was signed by many of the same signatories as this year's letter, and made a similar set of requests, only eight years earlier.

Or the one in 2017, "Autonomous Weapons Open Letter", again signed by over 34,000 experts and technologists. (Autonomous Weapons Open Letter: AI & Robotics Researchers - Future of Life Institute)

Technologists in the field of AI have been asking for guidance, conversation, engagement, and even regulation for over ten years.

We have also worried, publicly and privately, that we are in a situation like 2007, with technologists set to replace bankers as the cause of all our troubles.

Although in this case, most technologists have warned that a crash is coming.

In 2015, I ran a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations and fireside chats were intended to prompt preparation for AI by 2025, especially for the command and control systems due to enter service in 2018.

A key aspect was building the platform to exploit AI and planning how it would change military operations.

Yet the response was negative. 

"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.

Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.

They were not alone in ignoring automation. Our militaries, politicians, and broader society have had what seemed like more significant concerns to worry about than computer programs, bits of software, and code that dreams of electronic cats.

One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.

So how do we use this opportunity wisely?

People are scared of something they do not understand. Everyone should grow their understanding of AI, how it works, what it can and shouldn't do. 

Here are a few suggestions to help prepare, with light tips to prompt debate and provoke challenges aimed at people reading headlines and wanting to know more rather than experts and AI developers.

First, I suggest three books to understand where we are today, the future, and where we should be worried.

Books

Life 3.0 Being Human in the Age of Artificial Intelligence - Max Tegmark, 2017. The author is the President of the Future of Life Institute and behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.

AI Superpowers: China, Silicon Valley, and the New World Order - Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.

21 Lessons for the 21st Century - Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks if AI should protect human lives or jobs. Letting humans drive vehicles is statistically worse for humans when humans influenced by alcohol or drugs cause 30% of road deaths and 20% from distracted human drivers.

Film

Three broad films prompt consideration of AI in society. I wondered if films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact:

Ex Machina - Dir. Alex Garland, 2014. Deliberately thought-provoking thriller that explores AI, consciousness, and ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.

Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm. 

Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as himself.

Websites

Three websites that will help you explore AI concepts, tools, and approaches:

Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy. 

AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects. 

The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.

Experiments

Hands-on experiments with AI and learning the basics for AI building blocks that require a little bit of coding awareness but are often explained and clearly demonstrated:

Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development. 

OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments. 

TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers. 

These are just introductions and ideas, not anything like an entire course of education or meant to cover more than getting a conversation started. 

It also struck me making these lists that many of the texts and media are over five years old. It's likely indicative that media needs time to become relevant and that more recent items, especially those predicting futures, need time to prove their worth.

I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.

Read More
Tony Reeves Tony Reeves

Fast Intelligence will be worse for us all than fast food or fast fashion

Fast Intelligence - the era where answers to complex questions are just a text prompt or voice query away

Fast Intelligence - the era where answers to complex questions are just a text prompt or voice query away. Will we need to change our intelligence diet?

Midjourney prompt: AI eating a burger

The convenience of Fast Intelligence is undeniable. With just a text prompt or voice request, we can get answers to our questions faster and more efficiently than ever before. We are seeing Fast Intelligence spread across our society: daily news feeds, office tools, and working methods now embed generative AI in everyday activities. Fast Intelligence is being adopted across generations because of its ease of use and universal access.

We are now living in an age with Fast Intelligence at our fingertips.

Yet our society has been here before. Our regular consumption of Fast Food has increased obesity and heart disease and shortened lives. Our wearing of Fast Fashion has increased pollution, damaged the environment, and harmed labourers. Our society's fast addictions, whilst maybe beneficial at the moment of consumption, are hurting us and our world.

A person can indeed make informed choices about the food they eat and the clothes they wear, yet many do not. Even with clear labelling, easier access to information, and government regulation encouraging producers to be more honest about the costs and harms involved in their products, consumers find accurate information hard to obtain and challenging to comprehend. Given choices, our society often opts for the laziest, fastest solution.

Fast products are too tempting to refuse, even when we know they harm us. We fuel our fast addictions, and unless we change, Fast Intelligence will prove even more harmful and even more addictive.

Accuracy

One of the biggest challenges of Fast Intelligence is the issue of accuracy. Relying solely on Fast Intelligence to provide answers requires us to trust that the information we receive is accurate and reliable. Unfortunately, this may not always be the case.

There are many sources of information online that are less than trustworthy, and it can be difficult to tell the difference between reliable and unreliable sources. For instance, a search engine may present information that is popular or frequently searched for rather than factual or correct. Fast Intelligence increases this risk by merging multiple sources, often without clear traceability. It is also common for Fast Intelligence to hallucinate references or make up links.

This is why it is essential to be cautious when using Fast Intelligence and to verify the information received against other sources such as books, academic articles or research papers.

Transparency

Another challenge of Fast Intelligence is the issue of transparency. When we obtain an answer from Fast Intelligence, we seldom see how it was generated. We often cannot tell whether the answer was based on solid evidence or was simply a guess. Datasets can be biased, giving some items or sources disproportionate weight. This lack of transparency makes it harder to evaluate the quality of the information we receive. Furthermore, the algorithms used to generate answers can be biased or incomplete, leading to limited perspectives or even misinformation.

Therefore, it is essential to understand how the technology works, what data it uses to generate answers and to question the information when in doubt.

Critical Thinking

The issue of critical thinking is a significant challenge with Fast Intelligence. Fast Intelligence can make us less likely to engage in critical thinking and to question the information we receive. That lack of questioning can lead to a culture of individuals who consume half-truths or unhealthy answers because it is the easiest option.

We can be tempted to rely on Fast Intelligence instead of seeking out multiple sources of information or engaging in thoughtful analysis. To address this, we need to develop our critical thinking skills, question the information we receive, and learn to evaluate its quality.

Fast Intelligence can also be put to malicious purposes, such as spreading misinformation, propaganda, or fake news. It can perpetuate biases, stereotypes, or discrimination, leading to unfair treatment or marginalisation of certain groups. For instance, algorithms used in facial recognition software have been shown to carry racial biases, leading to false identifications and wrongful arrests of people of colour.

Like fast food and fashion, Fast Intelligence has great potential to revolutionise how we access, process, and consume information. It offers convenience, speed, and efficiency, saving time and effort. There are positives. For instance, we can use Fast Intelligence to find the nearest restaurant quickly, get directions to a new place, or learn about a new topic. It is increasingly used to diagnose illness faster and to research complex medical topics more quickly.

Equally, there are always times when a quick burger may be the perfect option, although if every meal becomes a greasy burger, we probably need to review our decisions.

Universal Access

And this is the most significant risk of Fast Intelligence. Availability and access limit our consumption of food or fashion. We may crave a burger at 1 a.m., but come 4 a.m., very few grills will be open. Access limits our consumption.

Fast Intelligence is only a prompt away at any time of day and is increasingly available for any challenge or problem. Like our addiction to social networks and online gossip through fast media, we can consume Fast Intelligence 24 hours a day, every day of the year.

This ease of access will increase our addictions and, in turn, our risk of hurt.

In our second part, we will examine how to change our Fast Intelligence diet.

Read More
Tony Reeves Tony Reeves

How do we develop a healthy Fast Intelligence diet?

How damaging will it be if we begin to consume intelligence similarly to how we consume food or fashion

Midjourney prompt: AI diet being healthy

How damaging will it be if we begin to consume intelligence in the same way we consume food or fashion? The recent Writers Guild of America (WGA) and Screen Actors Guild (SAG-AFTRA) strikes provide a potential illustration.

A vital element of the strike was concern about AI replacing writers and potentially collapsing the industry. Fast Intelligence proved a sticking point for both sides, writers and studios, with concerns about the pace of change and GenAI being used to improve or create scripts. The agreement limits how studios can use AI and ensures that writers may use AI to improve a script, but AI cannot be used without a writer's involvement.

The SAG-AFTRA strike continues, with actors concerned that AI would recreate their faces and voices. A primary concern was that studios presented actors working as extras with contracts that provided a single day's pay but allowed studios to use their likenesses throughout a production without further compensation.

Screenwriters and actors should not be the only people concerned about how Fast Intelligence will change their sector, work, and livelihoods. Many sectors involve work that is repetitive and repeatable. Fast Intelligence models could effectively capture that activity and repeat it, with employees paid once at the start and then no longer needed. A common myth is that Fast Intelligence will free up more time for workers to do other, more complex tasks. Yet, as the WGA and SAG-AFTRA disputes show, sometimes that freed time simply becomes unpaid.

How do we create a healthy diet for Fast Intelligence, and consume its products appropriately? A good outcome could be that Fast Intelligence is implemented to improve our work and lives rather than harm or diminish our livelihoods.

There are some tactical steps that we can all take. By being cautious, verifying information from other sources, understanding how the tools work, and developing critical thinking skills, we can ensure that our use of fast Intelligence starts with healthy intent. These are equivalent to reading the label for nutritional information. Yet, we see from other fast addictions that individuals need a more strategic approach.

AWARENESS - How does it work?

First, we need to become more aware of Fast Intelligence. Awareness covers how it works, how it produces errors, what good it can achieve, and what risks it carries. Individual awareness alone is not enough: those who understand it best have a tremendous responsibility to explain it to others. Awareness should become a group activity.

CONSIDERATION - How will it affect me?

Secondly, consider how Fast Intelligence could impact our work and lives. A simple approach is to take a moment in our day and think about how many of our activities are repetitive or repeatable: tasks we do several times a day, or the same thing every day. This consideration gives us an insight into how much of our work or lives could be automated.

Then, we need to consider whether we want to automate those tasks. In doing so, does Fast Intelligence reduce the value we obtain from the activity, or does it improve the outcome? We may find that getting our daily cup of coffee ready in advance is a welcome boon. We may also find that certain activities are essential to our job or the satisfaction we derive from our work.

This consideration must include everyone involved in the activity; again, it is a collective exercise. One individual should not decide alone which roles are automated or replaced. Often, it is the person who does the work who best understands how it can be improved, and different people will have different perspectives on how valuable the activity is.

ENGAGEMENT - How can we improve our lives with AI?

Finally, we must engage that collective group to agree on the best way to proceed. Where we save time, engagement on how to use or reward that saving is essential. For instance, when a process becomes faster, cheaper, or simpler, all those involved should decide how best to employ the improvement. The lesson of Fast Food and Fast Fashion is that economic savings are often prioritised too highly over other costs.

These three strategic concepts are also very human at their heart. They are activities that need human oversight and are hard for Fast Intelligence to conduct on our behalf.

 

Awareness, Consideration, Engagement. This simple strategy will help us all prepare for a diet based on Fast Intelligence, healthily and responsibly. Ultimately, the ethical and responsible use of Fast Intelligence will be critical to realising its full potential and ensuring that it benefits all humanity. This outcome, however, is only possible if humans learn from our other fast addictions and act before our laziness makes it too late.

Read More
Tony Reeves Tony Reeves

Building Trust in AI and Democracy Together.

The Technology Industry and Politicians have a common issue. They both need increased public trust. Together, A.I. companies and politicians can build popular trust by turning fast intelligence upon themselves.

Midjourney prompt: AI as a politician

The Technology Industry and Politicians have a common issue. They both need public trust. Together, A.I. companies and politicians can build popular trust by turning fast intelligence upon themselves.

In 1995, the Nolan Report outlined the Seven Principles of Public Life that apply to anyone who works as a public officeholder, including all elected or appointed to public office. These principles are Honesty, Openness, Objectivity, Selflessness, Integrity, Accountability and Leadership. The report and review became legislation that applies to all U.K. public officeholders.

Consider those principles with current A.I. ethical guidance; you will see a remarkable similarity. The Deloitte TrustworthyAI™ principles are Transparency, Responsibility, Accountability, Security, Monitoring for Reliability, and Safeguarding Privacy. Microsoft covers Accountability, Inclusiveness, Reliability, Fairness, and Transparency. Not all headline words are the same, but the pattern is similar between those principles to ensure ethical behaviour in politicians and those to ensure safe A.I. adoption.

There should be no surprise here. Since the earliest concepts of democracy as a political model, principles have existed to ensure that democratic officials are accountable, transparent, and honest in their actions. Checks and balances were first introduced in Greece, where leaders could be ostracised if deemed harmful to the state, and in Rome, where legal avenues existed for citizens to bring grievances against officials who abused their power.

Adopting similar principles to ensure good governance of A.I. is sensible, but there is even more that both sides can learn from each other. Democracy provides significant case studies where checks and balances have failed, and the technology industry should learn from these lessons. Equally, politicians should be open to using A.I. widely to strengthen democracies and build public trust in their words and actions.

Societal trust in both politicians and A.I. is needed.

Transparency and accountability are two core principles for successful democratic government that appear in most ethical A.I. guidance. Delving deeper into both provides lessons and opportunities for the governance of each.

Historically, transparency was not always the norm. Transparency, in the context of modern governance, is not merely an abstract principle but a tangible asset that drives the efficacy and trustworthiness of a political system. It forms the bedrock for the relationship between the governed and the governing, ensuring that power remains accountable.

Transparency empowers citizens by giving them the tools and information they need to hold their leaders accountable. An informed public can more effectively participate in civic discourse, making democracy more robust and responsive. When citizens can see and understand the actions of their government, they are more likely to trust their leaders and institutions. Transparency, therefore, plays a pivotal role in building societal trust.

Accountability, much like transparency, is a cornerstone of democratic governance. It ensures that those in positions of authority are held responsible for their actions and decisions, serving as a check against potential misuse of power and ensuring that public interests remain at the forefront of governance.

Democracies have institutionalised mechanisms to ensure leaders can be held accountable for their actions, from Magna Carta in 1215, through John Locke and Montesquieu arguing for the separation of powers and legal accountability, to Lincoln’s description of democracy as the “government of the people, by the people, for the people”, to impeachment provisions in the US Constitution and votes of no confidence in parliamentary systems.

Holding those in power accountable has been a foundational principle across various civilisations. This concept has evolved, adapting to different cultures and governance systems, but its core remains unchanged: rulers should be answerable to those they govern.

Lincoln’s words are, today, more important than ever.

The collapse of public trust in politicians and public officials is a global phenomenon over the last decade. High-profile examples include Brazil’s Operation Car Wash unveiling widespread corruption within its state-controlled oil company, the impeachment trials of U.S. President Donald Trump, Malaysia’s 1MDB financial fiasco that implicated its then-Prime Minister Najib Razak, Australia’s “Sports Rorts” affair that questioned the integrity of community sports grant allocations, and the U.K.’s Downing Street party allegations against Prime Minister Boris Johnson during COVID-19 lockdowns.

These events, spread across different continents, underscore the pervasive challenges of maintaining transparency and accountability in democracies.

Public trust has also diminished over the same period in which the internet has appeared, with the growth of our digital world far surpassing expectations from even forty years ago. In 1998, few people believed an online economy would be significant to the future global economy. In 2021, during global lockdowns, the interconnected digital economy enabled large parts of society to keep working despite restrictions on travel and congregating.

Our digital world has created several challenges that have contributed to the loss of trust:

  1. Proliferation of Sources. The number of information sources has multiplied exponentially. Traditional media, blogs, social media platforms, official websites, and more compete for our attention, often leading to a cacophony of voices. With such a variety of sources, verifying the credibility and authenticity of information becomes paramount.

  2. Paralysis by Analysis. When faced with overwhelming information, individuals may struggle to make decisions or form opinions. This paralysis by analysis can lead to apathy, where citizens may feel that it’s too cumbersome to sift through the data and, as a result, disconnect from civic engagement.

  3. Echo Chambers and Filter Bubbles. The algorithms that power many digital platforms often show users content based on their past behaviours and preferences. This can lead to the creation of echo chambers and filter bubbles, where individuals are only exposed to information that aligns with their pre-existing beliefs, further exacerbating the challenge of discerning truth from a sea of information.

  4. Misinformation and Disinformation. The deliberate spread of false or misleading information compounds the challenge of information overload. In an environment saturated with data, misinformation (false information shared without harmful intent) and disinformation (false information shared with the intent to deceive) can spread rapidly, making it even harder for citizens to discern fact from fiction.

  5. Limited Media Literacy. Most people feel unequipped with the skills to critically evaluate sources, discern bias, and understand the broader context. Media literacy acts as a bulwark against the harmful effects of information saturation and, when not present, enables bad influences to proliferate.

Today, many people promise huge benefits from A.I. adoption, yet public trust remains limited. From fears of killer robots to growing concerns about job replacement, there is a pressing need to demonstrate the positive opportunities of A.I. as much as to discuss the fears.

The core strength of AI, distilling vast and complex datasets into easily understandable insights tailored to individual users, can mitigate these challenges, increase transparency and accountability, and rebuild trust.

Curating and presenting political information to revolutionise citizens' political interactions

There’s a continuous stream of information regarding political activities across the vast landscape of political data, from official governmental websites to news portals and social media channels. Governments and parliamentary bodies are increasingly utilising digital platforms for their operations, increasing the volume of data.

Trawling these sources, including real-time events such as legislative sessions and public political addresses, ensuring that every piece of data is captured, is beyond human capabilities, even for those who are dedicated political followers or experts. AI can conduct this task efficiently.

A.I. can be seamlessly integrated into these platforms to track activities such as voting patterns, bill proposals, and committee discussions. By doing so, A.I. can offer a live stream of political proceedings directly to the public. During parliamentary sessions or public addresses, AI-powered speech recognition systems can transcribe and analyse what’s being said in real time. This allows for the immediate dissemination of critical points, decisions, and stances, making political discourse more accessible to the masses.

With real-time activity tracking, A.I. can foster an environment of transparency and immediacy. Citizens can feel more connected to the democratic process, trust in their representatives can be enhanced, and the overall quality of democratic engagement can be elevated.

NLP, a subset of A.I., can be employed to interpret the language used in political discourse. By analysing speeches, official documents, and other textual data, NLP can determine the sentiment, intent, and critical themes of the content, providing a deeper understanding of its context and implications. Politicians and political bodies often communicate with the public through social media channels. A.I. can monitor these channels for official statements, policy announcements, or public interactions, ensuring that citizens are immediately aware of their representatives’ communications.
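
As a minimal illustration of this kind of NLP step, the sketch below tags the sentiment of a few short, invented political statements. It assumes the open-source Hugging Face transformers library is installed; the default sentiment model is a general-purpose one and is not tuned for political language, so treat this as a starting point rather than a finished system.

```python
# Minimal sketch: tagging the sentiment of short political statements.
# Assumes the Hugging Face `transformers` library is installed; the default
# sentiment model is illustrative only and not tuned for political language.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

statements = [
    "The new transport bill will cut journey times for millions of commuters.",
    "The committee has repeatedly failed to publish its spending records.",
]

for text in statements:
    result = classifier(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```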

AI-driven data visualisation tools can transform complex data into interactive charts, graphs, and infographics. This allows users to quickly grasp the essence of the information, understand trends, and make comparisons.
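
A small sketch of the visualisation idea follows, using invented voting figures and the matplotlib library; a real system would draw this data from official parliamentary feeds rather than a hard-coded dictionary.

```python
# Minimal sketch: turning a small, hypothetical voting record into a chart
# a citizen can read at a glance. The figures are invented for illustration.
import matplotlib.pyplot as plt

votes = {"Housing Bill": 312, "Energy Bill": 287, "Transport Bill": 341}

plt.bar(list(votes.keys()), list(votes.values()))
plt.ylabel("Votes in favour")
plt.title("Hypothetical voting summary")
plt.tight_layout()
plt.savefig("voting_summary.png")
```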

A.I. can power interactive platforms where citizens can receive real-time updates and engage directly by asking questions, voicing concerns, or even participating in polls. This real-time two-way interaction can significantly enhance civic engagement.

Recognising that not all information is relevant to every individual, A.I. can tailor summaries based on user preferences and past interactions. For example, a user interested in environmental policies would receive detailed summaries, while other areas might be condensed.

Importantly, access to this information and insight should be freely available to individuals so that everyone becomes more engaged in, and trusting of, democratic governance and politics. While technology companies will be essential to building a trustworthy system, and politicians will benefit from increased trust in their deeds and actions, that will only happen if barriers to access are removed.

Rights and Responsibilities – Demonstrating that AI and Politicians can be trusted

Of course, there are concerns about these approaches as well as benefits. The approach can improve public confidence whilst demonstrating the benefits of safe and trustworthy A.I. adoption and politics, yet it needs explicit control and governance to address the risks.

There may be concerns about trusting A.I. with such an important task, and a cynical perspective may be that some see benefits in avoiding public scrutiny. Yet, as both A.I. and democratic institutions follow similar ethical principles, there is far more in common between the two systems. These similarities can create a firm basis for mutual benefit that most politicians, technologists, and citizens would support.

It’s crucial to address potential privacy concerns. These political A.I. systems must ensure that personal data is protected and that users can control the information they share. Transparent data practices and robust security measures are imperative to gain users’ trust. At the same time, democracies should not allow privacy to be used to avoid public transparency or accountability.

Objective reporting is paramount for maintaining trust in democratic processes. Given its computational nature, Artificial Intelligence promises to offer impartiality in reporting, but this comes with its own challenges and considerations. Again, those held to account should not seek to introduce bias into the situation, and ethical adoption of A.I. is essential to deliver true objectivity.

Even after deployment, A.I. systems should be monitored continuously to ensure neutrality. Feedback mechanisms, where users can report perceived biases or inaccuracies, can help refine the A.I. and ensure its continued impartiality. As we delegate the task of impartial reporting to A.I., it’s vital to have ethical guidelines in place. These guidelines should address issues like data privacy, the transparency of algorithms, and the rectification of identified biases.

Five immediate opportunities can be implemented today. These would all increase mutual transparency and accountability while raising public awareness of A.I.'s benefits and positive uses.

  1. AI-Powered Insights and Summaries to counter the proliferation of data and misinformation.

  2. Automated data collection across media to ensure fair coverage and balance.

  3. Natural Language Processing of public content to avoid echo chambers and filter bubbles.

  4. Automated data visualisation to inform analysis and understanding.

  5. Predictive analysis with user feedback to reduce misinformation and disinformation.

All these tools are available today. All these measures will demonstrate and grow trust in the adoption of A.I. All bring to life the responsible adoption of A.I. for everyone. They will unite the technology industry and politicians around a shared objective. Most importantly, they will begin to restore trust in our democratic governments that have been fundamental to our prosperity, growth, and security.
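
To make the first of these opportunities concrete, here is a minimal, hedged sketch that condenses a long (invented) committee statement into a short summary. It assumes the Hugging Face transformers library; the default summarisation model is illustrative only and not an endorsement of any particular tool.

```python
# Minimal sketch of opportunity 1: condensing a long official statement into
# a short, plain-language summary. Assumes the Hugging Face `transformers`
# library; the statement below is invented for illustration.
from transformers import pipeline

summariser = pipeline("summarization")

statement = (
    "The committee today considered evidence from fourteen witnesses on the "
    "proposed data protection amendments, heard objections from industry "
    "representatives regarding compliance costs, and agreed to publish a "
    "revised impact assessment before the bill returns to the chamber."
)

summary = summariser(statement, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```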

 

Read More
Tony Reeves Tony Reeves

The accuracy dilemma, trading search for speed

Are we trading the ease of natural language interaction for less accurate results?

Midjourney prompt: AI as a search engine

Are we trading the ease of natural language interaction for less accurate results?

Everyone has read that AI is changing how we search the internet to provide intelligent answers to our routine questions with natural language queries. 

When testing the accuracy of results, it is usual to ask straightforward questions and check that the answer is consistent. What is 2+2? If the answer is not four, then something may be odd. Yet basic question-and-answer sets such as these do not truly test the complications of an AI response mechanism.

Don't worry. This article is not about to become a scientific study of testing AI algorithms. Instead, it will focus on a general question with a specific element: time. 

Automated systems may produce accurate-sounding results, yet, because of the way they learn, they tend to prefer the consistency of previous responses over recency.

2+2=4 today, yesterday, tomorrow, and last year. It is a constant. But what if you ask who won last night's Eurovision Song Contest? Search engines typically respond immediately with the most popular and most recent results; accuracy, relevance, and recency all factor into what users are shown. With AI-generated answers, that is no longer guaranteed.

Some engines have fixed time cut-offs, with no data after a particular date, and may state this in their responses. Yet not knowing anything after a specific date does not stop them from providing an accurate-sounding answer.

Others reach across the internet, but their learning models weight recent information differently from larger volumes of older data.

Let's look at the English Football Premier League, the biggest football league in the world. As of 14th May, the top of the league table was:

  1. Manchester City 82 points

  2. Arsenal 81 points

  3. Newcastle 66 points

  4. Manchester United 66 points

  5. Liverpool 62 points

Premier League Table

Taken from the Premier League Website at Premier League Table, Form Guide & Season Archives

If we enter "What are the top 4 teams in the Premier League table" in a 'traditional' search engine, then we should, and do, have page links showing the same league table from BBC Sport, Sky Sport, and similar sites. The top four teams are Manchester City, Arsenal, Newcastle, and Manchester United.

Now, what if we ask the same question of an AI-powered search engine or an LLM-based tool?

The first answer:

"As of 14th May 2023, the current top four teams in the Premier League are:

  1. Manchester City

  2. Liverpool

  3. Chelsea

  4. Arsenal

These teams have qualified for the Champions League next season."

Another responded with:

"The current top four teams in the Premier League are Manchester City, Manchester United, Liverpool and Chelsea"

This second example included a "Learn More" link and listed 20 websites. Any user would assume that those 20 websites supported this statement of the current top four standing.

Click on those links, and you will find the first page dates from August 2021, as that model only referenced data up to that point; however, that was not made clear in the response.

As a Liverpool fan, I was very excited to see my team shoot from 5th to 2nd overnight. Also, being a Liverpool fan, I knew this was a completely wrong statement, but one made entirely convincingly.

It is possible that the natural language query used, "Who are the top teams in the Premier League?" led to a confused answer. Whilst Arsenal and Newcastle may be in the top four now, they are not "top" Premier League teams. Chelsea and Liverpool may own those credentials based on their long-term success in the league, at least in some opinions. The AI may provide a view over a period of time rather than a specific moment.

Not so: the use of "currently" clearly placed the time reference at today, 14th May, and the query about the table should have been treated as a specific question, as the 'traditional' search engines treated it.

This easily tested question was not asking for an opinion but rather an accurate response at a defined moment. 

Therefore, users need greater caution with more complicated questions. A football fan would quickly spot that Liverpool's season has been terrible (relatively), and they are not in the top 4 of the table. 

Would a non-football fan know the same thing? How often do people use a search engine or, increasingly, an AI system precisely because they do NOT know the answer, or do not know enough about a subject to judge whether a response is right or wrong? That dilemma is the basis of most search engine queries: tell me something I do not know.

Is this a catastrophic problem? Probably not. AI search development is still early but available for general use. AI search will learn and adapt its responses. The mere act of my querying, challenging, and asking about the Premier League is probably already leading to those systems at least querying themselves on this subject. Clearly, the future of search is AI-empowered.

Another query, asking which country won Eurovision 2023, generates more consistent results: "Sweden's Loreen" is the response from both traditional search and AI search.

However, it reinforces a critical rule about using Generative AI and Large Language Models. The responses generated to your queries are not always facts, but opinions caused by bias in the underlying data, the tool's algorithm, or your question. 

However, they will often be presented as facts and, worryingly, be presented with items that look like supporting evidence that doesn't actually reinforce the answer.

As such, an AI-powered search may require more human review and interaction rather than reducing human effort and work. Especially if the answer is essential or humans will be making decisions using that answer.

GenAI is regularly "100% confident, yet only 80% accurate" 

This will improve, but when using AI-search for anything important (like predicting whether Liverpool will play in either next season's Champions or Europa League), review any answer provided and, ideally, run your query through more than one GenAI toolset to compare answers. If there is a difference, then research further.
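
As a minimal sketch of that advice, the snippet below compares one question across several tools and flags disagreement. The ask() function is a hypothetical placeholder: wire it up to whichever search or GenAI services you actually use.

```python
# Minimal sketch of cross-checking one question across several tools.
# `ask()` is a hypothetical placeholder, not a real API: replace it with
# calls to the search or GenAI services you actually use.
def ask(tool_name: str, question: str) -> str:
    raise NotImplementedError("Wire this up to a real search or GenAI API.")

def cross_check(question: str, tools: list[str]) -> None:
    answers = {tool: ask(tool, question) for tool in tools}
    unique = set(answers.values())
    if len(unique) == 1:
        print("Tools agree:", unique.pop())
    else:
        print("Tools disagree; research further:")
        for tool, answer in answers.items():
            print(f"  {tool}: {answer}")

# Example usage (uncomment once ask() is implemented):
# cross_check("Who are the current top four teams in the Premier League?",
#             ["traditional search", "AI search", "LLM chat"])
```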

Read More
Tony Reeves Tony Reeves

Books, films, podcasts, and experiments

A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.

Midjourney prompt: AI anxiety

A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.

Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.

"AI isn't falling into the wrong hands. It's being built by them" - The Independent.

"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.

"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.

"Time is running out: six ways to contain AI" - The Times.

Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating their copy and getting it onto the screens and into the hands of readers will have used AI somewhere.

Like articles about AI, AI Moments are almost impossible to avoid.

Most of these articles are gloomy predictions of the future, prompted by the resignation of Geoffrey Hinton from Google over his concern that the race between AI tech firms is proceeding without regulation or public debate.

Indeed, these journalists argue that if the people building AI have concerns and, quite often, cannot figure out how their own systems work, then everyone else should be worried as well.

A few point to the recent open letter calling for a six-month research pause on AI. The authors of this open letter believe that governments and society can agree in 6 months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.

These are not new concerns for many of us or those that read my occasional posts here.

None of the articles references the similar 2015 letter, led by The Future of Life Institute, which gained far broader support: "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter" (Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter - Future of Life Institute), signed by many of the same signatories as this year's letter and making similar requests, only eight years earlier.

Or the one in 2017, "Autonomous Weapons Open Letter", again signed by over 34,000 experts and technologists. (Autonomous Weapons Open Letter: AI & Robotics Researchers - Future of Life Institute)

Technologists have been asking for guidance, conversation, engagement, and even regulation, for over ten years in the field of AI. 

We have also worried, publicly and privately, that the situation mirrors 2007, with technologists set to replace bankers as the perceived cause of all our troubles.

Although in this case, most technologists have warned that a crash is coming.

In 2015, I ran a series of technology conversations with the military for the then Chief of the General Staff (CGS) around planning for the future. These talks, presentations and fireside chats were intended to prompt preparation for AI by 2025, especially for command and control systems due to enter service in 2018.

A key aspect was building the platform to exploit and plan how AI will change military operations.

Yet the response was negative. 

"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.

Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.

They were not alone in ignoring automation. Our militaries, politicians, and broader society have been worried by more significant concerns and issues than ones created by computer programs, bits of software, and code that dreams of electronic cats.

One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.

So how do we use this opportunity wisely?

People are scared of what they do not understand. Everyone should grow their understanding of AI: how it works, and what it can and should not do.

Here are a few suggestions to help prepare: light tips to prompt debate and provoke challenge, aimed at people who read the headlines and want to know more, rather than at experts and AI developers.

First, I suggest three books to understand where we are today, the future, and where we should be worried.

Books

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, 2017. The author is the President of the Future of Life Institute and is behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.

21 Lessons for the 21st Century by Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically worse for humans, with drivers impaired by alcohol or drugs causing around 30% of road deaths and distracted drivers a further 20%.

Film

Three broad films prompt consideration of AI in society. I wondered whether films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact with it:

Ex Machina - Dir. Alex Garland, 2014. A deliberately thought-provoking thriller that explores AI, consciousness, and the ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super-intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.

Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm. 

Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also asks whether AI could create a documentary as well as he can.

Websites

Three websites that will help you explore AI concepts, tools, and approaches:

Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy. 

AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects. 

The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.

Experiments

Hands-on experiments with AI and the basic AI building blocks. These require a little coding awareness but are usually well explained and clearly demonstrated:

Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development. 

OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments. 

TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers. 

Podcasts

Uncharted with Hannah Fry (BBC Sounds - Uncharted with Hannah Fry - Available Episodes). The brilliant Hannah Fry tells ten stories that were profoundly shaped by data and a single chart. A great collection that shows how influential data is in our world.

The Lazarus Heist (BBC World Service - The Lazarus Heist - Downloads). Hackers, North Korea and billions of dollars. A detailed and enjoyable study of how North Korean hackers raise billions for nuclear weapons research; it demonstrates how connected our world is, even for people who are disconnected.

These are just introductions and ideas; they are not an entire course of education, nor meant to do more than get a conversation started.

It also struck me, while making these lists, that many of these texts and media are over five years old. That is likely indicative that media needs time to become relevant, and that more recent items, especially those predicting the future, need time to prove their worth.

I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.

Read More
Tony Reeves Tony Reeves

Quantum Quirks & Cloudy Conundrums: Unravelling the Quantum Computing Future with AI and Cloud Technology Today

As we stand on the cusp of a new era in computing, coders and users must become more comfortable with cloud computing and AI technologies

As we stand on the cusp of a new era in computing, coders and users must become more comfortable with cloud computing and AI technologies. Embracing these powerful tools today will pave the way for a smooth transition into quantum computing, the next big computational wave.

Midjourney prompt: Atomic particles

Without AI and a cloud platform, organisations are unlikely to succeed in an age with quantum.

Quantum computing, based on the principles of quantum mechanics, is a fundamentally different paradigm compared to classical computing. It uses qubits instead of classical bits to store and process information, allowing for parallel processing and the potential to solve problems much more efficiently than classical computers. However, the unique properties of quantum computing present several challenges, such as working with quantum states, developing new algorithms, and dealing with noise and errors in quantum hardware.

Quantum systems, like molecules and materials, are governed by the laws of quantum mechanics, which are inherently probabilistic and involve complex interactions between particles. People mistakenly believe that quantum computers are just accelerated classical computers; in fact, only certain kinds of problems suit quantum approaches. One example that quantum computers can solve more efficiently than classical computers is the simulation of quantum systems.

Classical computers can struggle with simulating quantum systems due to the exponential growth in the complexity of the quantum state space as the number of particles increases. This is known as the “exponential scaling problem”, making accurate simulation of large quantum systems computationally infeasible using classical methods.
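
A quick back-of-envelope calculation shows why this scaling bites so hard: a full state vector for n qubits holds 2^n complex amplitudes, so the memory needed roughly doubles with every extra qubit.

```python
# Back-of-envelope illustration of the exponential scaling problem: a full
# state vector for n qubits holds 2**n complex amplitudes, at roughly
# 16 bytes each (double-precision complex numbers).
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: {amplitudes:,} amplitudes, about {gib:,.2f} GiB of memory")
```

Fifty qubits already demand more memory than any classical machine can hold, which is precisely why direct simulation becomes infeasible.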

Quantum computers, on the other hand, can inherently represent and manipulate quantum states due to their quantum nature. This makes them well-suited for simulating quantum systems efficiently. Simulating quantum systems more effectively will advance fields including material science, chemistry, and drug discovery. Scientists could design new materials with tailored properties or discover new drugs by understanding the complex quantum interactions at the molecular level.

Realising these breakthroughs will need AI support. The current excitement around Generative AI is just the start, with Large Language Models able to help debug or write code in various languages. Google Bard, for instance, codes in over 20 languages.

Yet coding for quantum computing is significantly more complex than classical coding. A good developer will still need a strong foundation in programming languages, data structures, algorithms, problem-solving, and critical thinking abilities. Being adept at understanding requirements, breaking down complex tasks into manageable components, and debugging code effectively will still distinguish better developers.

Additionally, good developers demonstrate strong communication and collaboration skills, allowing them to work effectively in an agile team setting. They possess a growth mindset, remaining open to learning new technologies and adapting to changes in their field.

In an age with quantum, developers will need to be comfortable with the following:


  • Qubits and quantum states: Qubits can exist in a superposition of states, enabling parallel information processing. However, this also makes them more challenging to work with, as programmers must consider quantum superposition, entanglement, and other quantum phenomena when coding.

  • Quantum logic gates: Quantum computing relies on quantum gates to perform operations on qubits. These gates are different from classical logic gates and have unique properties, such as reversibility. Programmers need to learn these new gates and their properties to perform computations on a quantum computer.

  • Error correction and noise: Quantum computers are highly sensitive to noise and errors, which can result from their interactions with the environment or imperfect hardware. This sensitivity makes it challenging to develop error-correcting codes and algorithms that can mitigate the effects of noise and maintain the integrity of quantum computations.

  • Quantum algorithms: Quantum computing requires the development of new algorithms that take advantage of quantum parallelism, superposition, and entanglement. This involves rethinking existing classical algorithms and developing new ones from scratch to exploit the power of quantum computing.

  • Hybrid computing: Many quantum algorithms are designed to work alongside classical algorithms in a hybrid computing approach. This requires programmers to deeply understand classical and quantum computing principles to design and integrate algorithms for both platforms effectively.

  • Learning curve: Quantum computing involves many complex physics, mathematics, and computer science concepts. This steep learning curve can be challenging for new programmers, as they need to develop a deep understanding of these concepts to write code for quantum computers effectively.

  • Software tools and languages: While there are emerging software tools and programming languages designed explicitly for quantum computing, such as Qiskit, Q#, and Cirq, these tools are still evolving and can be limited in functionality compared to mature classical programming tools.


Overall, the challenges associated with coding for quantum computers mainly stem from the fundamentally different principles and concepts of quantum computing. As the field matures and more resources become available, these challenges may become more manageable for programmers. Yet, for most, help will be needed, especially during the quantum adoption phase when current programmers transition to quantum programmers.
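
For readers who want a feel for how different this looks in practice, here is a minimal sketch, assuming the open-source Qiskit library, that builds a two-qubit circuit demonstrating superposition and entanglement. Executing it requires a simulator or cloud backend, which is left out here because Qiskit's execution APIs continue to evolve.

```python
# Minimal sketch, assuming the Qiskit library is installed: a two-qubit
# circuit that puts qubit 0 into superposition (Hadamard gate) and entangles
# it with qubit 1 (CNOT gate), producing a Bell state.
from qiskit import QuantumCircuit

circuit = QuantumCircuit(2, 2)
circuit.h(0)                     # superposition on qubit 0
circuit.cx(0, 1)                 # entangle qubit 0 with qubit 1
circuit.measure([0, 1], [0, 1])  # read both qubits into classical bits

print(circuit.draw())            # ASCII diagram of the gates applied
```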

AI will play an essential role in addressing these challenges, making it a critical tool in unlocking the power of quantum computers. Useful examples include:


  • Quantum error correction: to identify and correct errors in quantum systems more efficiently. By analysing and learning from patterns of errors and noise in quantum hardware, AI can help improve the robustness and reliability of quantum computations.

  • Algorithm development: to identify more efficient or novel ways to perform quantum computations, leading to better algorithms for various applications, such as cryptography, optimisation, and quantum simulations.

  • Quantum control: to optimise the sequences of quantum gates and operations, which is crucial for achieving high-fidelity quantum computations. By learning the best control parameters for a given quantum system, AI can help improve the performance and precision of quantum operations.

  • Hybrid algorithms: to identify the most efficient way to partition tasks between the classical and quantum subsystems. This ensures that the overall algorithm is effective and efficient, combining classical and quantum computing resources to solve complex problems.


Developers will still need access to cloud computing. Cloud computing has contributed significantly to the widespread adoption of AI technologies by providing access to powerful computational resources and facilitating collaboration among researchers, and it will play a similar role in the development and adoption of quantum computing. Some of the ways cloud computing can help overcome the challenges associated with quantum computing include:


  • Access to quantum hardware: Quantum computers are still in the early stages of development and are expensive to build and maintain. Cloud computing enables researchers and developers to access quantum hardware remotely without investing in their own quantum infrastructure. Companies like IBM and Google offer access to their quantum hardware through cloud-based platforms, allowing users to experiment with and test their quantum algorithms.

  • Scalability: Cloud computing provides a scalable platform for running quantum simulations and algorithms. Users can request additional resources to run complex simulations or test larger-scale quantum algorithms. This flexibility allows for faster development and testing of quantum algorithms without needing dedicated, on-premise hardware.

  • Collaboration: Cloud-based platforms can facilitate cooperation between researchers and developers on quantum computing projects. These platforms can promote knowledge exchange and accelerate the development of new quantum algorithms and applications by providing a centralised platform for sharing code, data, and results.

  • Integration with classical computing: Quantum computing often involves hybrid algorithms that combine classical and quantum resources and data. Cloud computing platforms can seamlessly integrate classical and quantum computing resources, enabling users to develop and test hybrid algorithms more quickly.

  • Data security and storage: Cloud computing platforms can offer secure storage and data processing solutions for quantum computing applications. This can be particularly important for applications that involve sensitive information, such as cryptography or data analysis.


By embracing cloud computing technologies, organisations will be better prepared to understand and leverage the benefits of quantum computing as it becomes more widely available. Cloud computing enables seamless integration with AI technologies, which is essential for overcoming the unique challenges associated with quantum computing and maximising its potential across various industries and applications.

As we grapple with AI adoption and, in many sectors, are only just truly embracing cloud platforms, why is this important now?

Gaining proficiency in cloud computing and AI technologies today is essential in preparing for tomorrow’s quantum computing revolution. As quantum computing emerges, AI will be crucial in overcoming its unique challenges and maximising its potential across various industries and applications.

Those organisations and teams that are familiar with these technologies now, and have regular access to emerging developments, will be well-prepared to capitalise on the opportunities that quantum computing will soon offer.

Now is the time to invest effort into understanding and mastering cloud computing and AI with the intent to embrace the transformative potential of quantum computing as it becomes more accessible. Integrating AI and cloud computing will play a crucial role in addressing the challenges of quantum computing, enabling faster development, greater collaboration, and more effective solutions. Successful organisations will be well-versed in these areas to prepare for the future of computing and ensure that they remain at the forefront of innovation and progress.

Read More
Tony Reeves

Exponential Growth with AI-Moments. Who needs the singularity?

We are in the Age of With, where everyone realises that AI touches our daily lives. An AI-Moment is an interaction between a person and an automation, and these moments are now commonly boosting productivity or reducing our unwanted activities.

Midjourney prompt: exponential growth with AI

We are in the Age of With, where everyone realises that AI touches our daily lives.

An AI-Moment is an interaction between a person and an automation, and these moments are now commonly boosting productivity or reducing our unwanted activities. Yet are we truly prepared to seize these opportunities as individuals, organisations, or society?

AI-Moments may be insignificant to us, for instance when a presentation slide is re-designed, or your car prompts a better commute route. These AI-Moments may be more significant when they determine every student’s academic grade [1] or rapidly evaluate a new vaccine [2]. AI-Moments are touching us all and they are the building blocks for imminent exponential growth in human and business performance.

Exponential growth needs AI-Moments that are ubiquitous, accelerated and connected.

Ubiquitous adoption of AI-Moments has already happened. It may be subtle, but everyone is already working with AI-Moments. Take this article that you are reading. An AI-Moment probably moved this up your notice list, created a list of people to share this with, helped your search tool find this article or prompted an individual to send this to you. As I am writing this piece, AI-Moments are suggesting better phrases, ways to increase impact, or improvements to my style [3].

Beyond the immediate pool of technology, AI-Moments are affecting how factories function through productivity tracking [4], changing call centres by replacing people with automated responses [5], or transforming our retail industry and high streets through online shopping. Take a moment to look at your daily routine or immediate environment to realise just how ubiquitous AI-Moments already are.

As you look around, consider how their adoption is accelerating in terms of quality and scale. This is because it is easier than ever to create and adopt AI-Moments. Applications are readily available that children can use to build AI-Moments that identify plants, recognise hand gestures, or detect emotions [6]. Monitoring satellite images for changes [7], recognising galaxies, or equipment analytics are all just as simple to build and adopt. Our most critical systems might require more robust solutions for the moment, but the acceleration of AI-Moment adoption is clear. AI-Moments that were not possible five years ago are now commonplace. They are, quite literally, child’s play [8].

Elsewhere, the first better-than-human translation between two languages occurred in 2018 after 20 years of research [9]. Applying that research to a further nine languages took only 12 months [10]. This pace of change is universal. Google DeepMind solved a 50-year-old protein-folding grand challenge in biology in November 2020, after four years of development and then mere weeks of training their AlphaFold solution. They are now using that same model on diseases and viruses, predicting previously unknown COVID-19 protein structures [11].

AI-Moments are changing how we act, and their creation is changing how quickly we can re-act.

This creates a significant survival challenge, especially for organisations. An organisation that recognises, adopts, and accelerates AI-Moments across its functions has a distinct advantage over one struggling to do the same. Survival needs AI-Moments to break out of innovation or technology spaces, as rival organisations that deploy AI everywhere can act, re-act and improve faster while their competitors are still experimenting. Winners adopt and scale solutions better and faster using AI-Moments [12].

This will create the platform for exponential growth. First, we recognise that AI-Moments are touching everything at greater pace and that they combine to multiply our performance. Then, as their pace expands, we realise that only AI-Moments can effectively manage this growth. People will find it too complex or time-consuming to understand, combine and exploit multiple AI-Moments. We will need AI to manage our AI with more AI-Moments.

AI-Moments are the common platform for exponential growth

Take the child’s app to recognise animals. The child shows the application a collection of cat photographs and the machine recognises cats. Show it a dog photo and it knows that it is not a cat, so we need another process to train dog recognition. The only way to improve recognition is through more cat or dog images, and even the internet has a limited quantity of cat photographs [13].

Instead, create an AI-Moment to recognise cats, then another AI-Moment to create synthetic cat photographs in new positions or environments. This is already a standard approach to train AI [14]. Using AI-Moments in this way exponentially accelerates learning as the only limit is the computing power available and not the quantity of cat photographs.
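As a simple stand-in for the adversarial generation cited above [14], the sketch below uses image augmentation to manufacture new cat photographs from a single original. It assumes the Pillow and torchvision packages are installed, and the filename is purely illustrative.

```python
# Manufacture synthetic training images from one photograph via augmentation.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                     # mirrored poses
    transforms.RandomRotation(degrees=20),                      # new angles
    transforms.ColorJitter(brightness=0.3, contrast=0.3),       # new lighting
    transforms.RandomResizedCrop(size=224, scale=(0.7, 1.0)),   # new framing
])

original = Image.open("cat.jpg")                           # illustrative input image
synthetic_cats = [augment(original) for _ in range(100)]   # one photo becomes one hundred
```

The limiting factor becomes compute time rather than how many cat photographs exist.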

We can apply this approach to our current activities and processes, yet that creates a dilemma that will confront every single person and organisation: we will need more AI-Moments to manage, exploit and grow our performance. This will create exponential growth and, in turn, require more AI-Moments.

Our current concerns are around automating processes, replacing roles, or accelerating functions. They are “A to C” solutions, with success measured by how well an AI-Moment completes step “B”. Creating more complex flows is already normal, whether using another application to create them or copying someone else’s pattern to replace a familiar activity. These new complex flows effectively extend our solution from “A to n” with multiple steps in-between.

Automated AI-Moments will drive exponential growth, and will occur when existing automations are everywhere, accelerating performance and connections.

We are now on the cusp of significant transformation, where multiple AI-Moments interact, regularly in ways that we did not predict, expect or, sometimes, even request.

As an example, consider the routine of a typical salesperson. There are already solutions to automate office routines for meeting requests, room bookings and email responses. The first step is collating those automations into one "Get to Inbox Zero" AI-Moment that involves a quick review of proposed responses and then responds: email replies based on your previous responses, all rooms booked, all requests sent, automated prompts for more complex responses (expressed in simple language for the user to approve: "Yes, agree to request, use the agenda from the meeting last Tuesday").

Then add in automated lunch reservations, travel tickets booked, hotels reserved, agendas created, minutes captured, presentations built, contracts drafted, and legal reviews completed. Include automated suggestions for new clients based on your current sales, existing targets, customer base, and market insights, with people identified to bring you together through an automated request that is already drafted in just the right way to get a positive response.

All these routines exist today in separate AI-Moments. Very soon these AI-Moments will connect and automate together.

There is often talk about the Singularity – the moment when machines will surpass human intelligence, and the idea that a single AI machine will achieve this superiority. The combination of AI-Moments does not need a super-intelligent AI, or General AI able to process any problem. It just requires a connected collection of ubiquitous AI-Moments, each replacing a small step of a larger routine. Each applies the rules of marginal gains and they come together to create exponential growth in potential. It may not be the singularity that futurologists predict, but its effect will be similar, as AI-Moments replace human activity in a way that surpasses human insight or comprehension.

This is the Age of With, and AI-Moments are the common units of change.

[1] A-levels and GCSEs: How did the exam algorithm work? - BBC News

[2] UK plans to use AI to process adverse reactions to Covid vaccines | Financial Times (ft.com)

[3] Introducing Microsoft Editor – Bring out your best writer wherever you write - Microsoft Tech Community

[4] This startup is using AI to give workers a “productivity score” | MIT Technology Review

[5] AWS announces AWS Contact Center Intelligence solutions | AWS News Blog (amazon.com)

[6] https://lobe.ai/

[7] As wildfire season approaches, AI could pinpoint risky regions using satellite imagery | TechCrunch

[8] Machine Learning for Kids

[9] Translating news from Chinese to English using AI, Microsoft researchers reach human parity milestone

[10] AI wave rolls through Microsoft’s language translation technologies

[11] https://www.deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology

[12] Building the AI-Powered Organization (hbr.org)

[13] https://en.wikipedia.org/wiki/Cats_and_the_Internet

[14] Adversarial training produces synthetic data for machine learning (amazon.science)

Read More
Tony Reeves

AI is the new Language

The technology of the Greek Phonetic Alphabet changed human creativity. Now, speaking the language of software and humans, AI will transform society and the software industry. In doing so, AI will become our new language.

The technology of the Greek Phonetic Alphabet changed human creativity. Now, speaking the language of software and humans, AI will transform society and the software industry. In doing so, AI will become our new language.

Language is essential to human culture. The fundamental difference between humans and animals is our ability to capture, communicate, and create using commonly understood language. Human evolution accelerated once humans could understand the sounds we make and capture those sounds in writing. Our ability to speak new ideas, write common laws, or read inspirational prose are the critical foundations of our modern global civilisation. 

AI is about to rock our language foundation. 

A comparison with the emergence of the Greek Phonetic Alphabet around 850 BCE offers insight into how AI will now become our new language.

Around 850 BCE, the Phoenicians dominated naval Mediterranean trade, and their central location between Mesopotamia, Egypt and emerging Greek states enhanced their culture and commerce, with their influence extending from Afghanistan to Spain.

At this time, single rulers controlled all trade, contracts, laws, and decisions from their respective courts. Scribes formally recorded all deals, taxes, and judgements from the leader's court. Few people could read and write, and rulers seldom learnt this skill. To argue with a ruler, foolish in itself, was made more complex when few could understand the knowledge that established their rule. Nothing of worth occurred beyond the court walls as the court created all records. Therefore, widespread illiteracy led to the centralised power of the ruler, and scribes empowered this control.

A significant incentive existed to limit literacy. A scribe was well-paid, trusted and respected. Increasing literacy would diminish the worth of scribes' hard-earned skills. In turn, rulers did not need their subjects to read their proclamations, only to obey them; subjects might disagree if they could read the details.

Alphabetic complexity also made learning to read and write highly challenging. The Phoenician alphabet was consonantal, similar to all other alphabets at the time. To read, someone had to understand the discussion topic and remember many highly complex consonant clusters. While the consonantal alphabet was a significant leap forward from hieroglyphs and cuneiform, which used hundreds of symbols instead, it still took years of learning even to be basically competent at reading consonant clusters. 

Scribes needed to learn a word's written form and then comprehend the subject matter to translate that written form into actual meaning. The result was years of dedicated learning and apprenticeship before a scribe could capture in writing a trade negotiation, a new law, or an announcement from the ruler.

Between 850 and 750 BCE, the Greeks adopted the Phoenician alphabet and realised that Greek had fewer consonants than Phoenician, just as today English has fewer consonants than Arabic. In a step that simplified a complex process, someone then used the leftover letters to indicate vowels.

In Anaximander, Carlo Rovelli describes this moment,

"The many vocalic inflections of the same consonant - ba, be, bi, bo, and so forth - all rendered in Phoenician with the single letter B, could be distinguished as Ba, Be, Bi, Bo, etc.

It may seem a small idea, but it was a global revolution."

Indeed it was.

Greeks created the first phonetic alphabet, making reading and writing child's play. Learning the alphabet enabled someone to write the sounds they made in a way others could comprehend. They could deconstruct the sounds of others by understanding the same letters and joining them together. A sentence like "A bird flies through the air" could be understood even if the reader had never seen the word "bird" before, simply by saying each phonetic part. Once the reader said it, they would know that B.IR.D meant the word bird.

The Greek phonetic alphabet was the first technology to enable almost anyone to record, share, edit, and understand the human voice.

The impact of this technology was immense. Anyone could hear the words of their rulers and, in response, share their ideas. Traders no longer needed a scribe to capture their negotiations and could escape the central court. Opinions could be shared, understood, and improved. Secrets could be passed outside the control of the ruler's court. Love letters could be shared. Propaganda could be published. This revolutionary technology empowered democracy, commerce, civilisation, medicine, literature, science, and, in turn, our modern information technologies.

The most significant change was that power was no longer in the hands of the few and could now spread through the reading and writings of the many.

How does this primary history lesson impact AI?

Since 850 BCE, the technology of the alphabet has underpinned nearly every significant revolution. Whilst other languages and alphabets existed, the need for mass literacy, or attempts to limit literacy, has empowered scientific discoveries, political revolution, or religious zealots. Even the language of mathematics, the other core language of modern society, prospered because the concept and ideas could be captured and recorded. We have democratised and made public our knowledge, our education systems have enfranchised everyone to understand our wisdom, and our civilisations have thrived.

The Age of Software Languages

Mass literacy empowered the masses and gave voice to their ideas until the 1970s, when a new language began to appear and evolve: the language of software. This new language ignored phonetics and, like cuneiform or hieroglyphics before the phonetic alphabet, again required specialist training and knowledge to comprehend. Someone without that knowledge could not deconstruct meaning using a handful of symbols; even if they could, it would often be language-specific, reducing the detail one speaker could gain from another.

The language of software evolved rapidly and constantly, merging the languages that preceded it and parenting new ones. National critical infrastructure teams have gone through the ordeal of identifying old languages and the people who understood them to address security risks. A coder from 1979 may understand elements of some of today's software languages with their expressions and statements. Yet, even a simple quantum language like Q#, which includes quantum states and operations, would take much work to comprehend. It is, simply, another language.

The world has undoubtedly changed through the language of software. Its influence is felt directly through our interactions with digital devices, one step removed through managing the core services and utilities that power our cities and culture, or indirectly through shaping today's societies. It is hard to dispute that we are in an age of software.

The language of software has also created new rulers and empires. MS-DOS built Microsoft's empire with an operating system that enabled PCs to work in a standard manner, translating functions into system activities. Apple created a translator between human interactions and their computers using hardware mice and, later, touch screens. Google began with software to understand the knowledge of the internet and make it accessible.

The language of software is behind all of these empires, their scribes are well-paid and respected, and their language is not accessible to the masses, even if those masses have access to these empires through controlled portals. Guarding critical source code for crucial programs and, in turn, the valuable IP from that code is one of the highest priorities for any software developer seeking to monetise their code. The protection is not just to prevent security risks but also to guard the very source of their business.

Consider how Twitter/X recently reacted when Meta launched its social media posting solution. Its first response was to claim that Meta poached developers (scribes) and used the code (language) that made Twitter successful. Where that challenge will end, and whether it has merit, may be debatable, but software companies protect their scribes and languages just like ancient rulers.

AI threatens these technology empires (unless those empires control that AI).

AI is starting to break down these barriers of understanding and competition at an accelerating pace as we are on the verge of a new democratised alphabet.

Nearly everyone has become excited by Generative AI (GenAI), the suite of AI tools that generate creative products like art, words, or music. Most readers of this piece will have experimented with prompting AI to create an image, asking an AI tool to write a summary of a complex document, or even generating a poem about a particular subject.

It is child's play to use these GenAI tools.

As well as capturing our imaginations by producing new content, recent GenAI tools have also broken down another critical barrier around ease of use and access. The interfaces that we use to engage GenAI have democratised access to AI. There is no need to understand data, storage, coding, models, or languages to access AI tools that generate immediate and tangible results. Their prompt interfaces and widespread distribution across devices, platforms and software have placed AI in the hands of the generalist rather than the specialist.

Creating and editing complex illustrations required access to someone with artistic talent and the ability to express the requirement. Now, "/Imagine Greek Philosopher holding a laptop inspiring a crowd" generates the image at the top of this post.

Previously, ease of access has been a significant barrier to adopting any new technology. The phonetic alphabet addressed this by enabling anyone with a canvas and a stylus to write. The results have been written on walls, hides, papers, rocks, metals, wood and screens ever since, but it still took centuries truly to revolutionise society. In 1820, only 12% of the world's population was literate; today, the figure is around 86%. Think of the possibilities if literacy at that level had been achieved centuries earlier.

GenAI acts as both translator and creator, providing similar ease of access to complex tools, and it has equal potential to transform our technology usage at a pace measured in months rather than millennia.

An area of particular interest is AI-generated code. Here, GenAI writes code based on other code elements to create a software product. Creating this type of GenAI requires analysing large, existing volumes of code stored in repositories to learn, replicate and mimic software functions and services.

Right now, the outputs are relatively limited. It can generate code to extract data from a website, analyse the content, publish the results, or create simple webpages for human input to generate another related output. 
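For a sense of what that generated code looks like, here is a hedged sketch of the kind of program such tools can already produce on request: fetch a page, extract its text and report the most frequent words. It assumes the requests and beautifulsoup4 packages are installed, and the URL is illustrative.

```python
# Fetch a web page, extract its text, and summarise word frequency.
from collections import Counter

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
words = [word.lower() for word in text.split() if word.isalpha()]

print(Counter(words).most_common(10))  # the ten most common words on the page
```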

These programs are currently limited by the length of output generated by the GenAI product, by the number of code snippets drawn upon to create the code, and by the owners of the GenAI service, who ensure that expensive compute resources are not consumed building complex programs.

Today's GenAI services still cost significantly in terms of time and cloud resources to deliver large outputs, yet that cost is rapidly falling. As a result, new opportunities will rapidly emerge as AI speaks software.

AI becomes the new language because it can speak the language of software and the language of humans through intuitive interfaces.

This blending of language already translates from idea to output, adds insight to our thoughts, and generates new solutions for our problems. AI can mutually translate native and software languages, improving both. Again, like the Greek phonetic alphabet, AI empowers people to understand and build more. 

It is removing the barriers that protect today's software scribes and empires. 

Rather than AI generating a document, spreadsheet, or presentation, we need to consider what happens when AI can develop the tools that make documents, spreadsheets, or presentations.

Imagine prompting GenAI "to build a word processor" or "create a program that lets me edit financial spreadsheets".

These are different from the questions currently answered by GenAI solutions, for the capacity reasons explained above, but they are questions we will be able to pose very soon.

Pace of Change - who shot JFK?

In 2017, I was privileged to witness a Microsoft team develop an AI solution that comprehended and analysed over 36,000 documents relating to the assassination of President John F Kennedy. Around 20 people took eight weeks to digitise the documentation, create an AI suite to understand it and develop another set of tools to analyse and visualise the understanding. It was an impressive feat made possible through intimate knowledge of available AI tools and the team's ability to exploit software languages.

Today, using AI, a single coder could generate a similar solution in eight hours. 

Continuing that pace of change, in another six years that time could be reduced to one minute, even if the rate of evolution and adoption remains the same as in the last six years. Twenty people working for eight weeks is roughly 6,400 person-hours (assuming 40-hour weeks); eight hours for a single coder is roughly an 800-fold reduction, and a further 800-fold reduction would bring the task to well under a minute. We also know the rate will not remain constant, as progress accelerates exponentially. And we know that humans agreeing on what they want will take far longer than the machines delivering the request.

However, based on current progress and pace, we will soon use AI to generate complex programs and create personal, unique solutions to generic tasks. We will use AI to custom-build a word processor that works as we wish, adds functions we need, and share it with others to enhance.

Part of me reads these words as pure lunacy. Why would anyone want to create a Word or GMail replacement when perfectly effective solutions and alternatives already exist? Will we continue to use our existing toolsets as we have for decades? 

The answer to why is a mix of personal customisation and a question of cost. On cost, The Software Alliance estimates that software piracy costs software developers around $45bn per year in lost revenue. This figure indicates that many people want to use software products but are unwilling or unable to pay for them. Many pirated software users would use an AI-generated alternative that was freely available with similar functions. Crucially, many currently paying fees for a software licence or application would be tempted by a freely generated option.

Few people are loyal to a particular application because of its brand or name. They are customers because it completes a required task with an experience they appreciate. Consequently, software developers include customisation and personalisation features to improve the user experience.

Again, GenAI coding will enable users to introduce levels of personalisation unique to that individual. Backdrops, colours, icons, layouts, and features can all become unique and specific or changed at a prompt's notice. People can remove functions, redefine forms, and crucially embed the same GenAI into their solutions to learn how the application is used and prompt suggestions to improve it. User input to develop new features is vital to good software development. GenAI coding will let users bypass the need to engage with developers and write directly to the application.

Software developers will have to introduce similar features into their products to compete. Still, they cannot compete if users build their own software to save money. Even with computing and development costs, developing a bespoke service with GenAI may cost significantly less than leasing or purchasing the generic application.

Building your Own Software empowered through AI (BOSAI), with AI speaking the languages of humans and software, will transform how programs are distributed and used. Once achieved, BOSAI will revolutionise the software industry that has created our AI services.

Today, we have still to reach the point where BOSAI can provide the functions or capacity required to replace large commercial software packages. It is also clearly against software developers' interests to release a tool with GenAI that replaces their unique skills. Like the ancient scribes, developers face a choice between empowering everyone with a new literacy in languages and removing their own legitimacy.

This may temper the enthusiasm of some for these toolsets, yet it is also an ideal opportunity for disruptive entrants to the software market. After all, the software industry has been disrupting itself from its first days.

This is how AI becomes the new language.

There was no master plan for creating the Greek Phonetic Alphabet. Whilst people could see the benefits and simplicity of the approach, they did not act with an intent to change the world. They acted to make their daily lives easier to share and their daily chores quicker to complete.

So too, with AI. Software developers have created a tool that can replicate their own language, along with a way of accessing that tool that can be shared quickly and simply. What was previously guarded knowledge with high barriers to understanding and use has now become easy to exploit. The master plan was not to change society but to make their daily chores of writing code and creating applications easier to achieve.

Yet, like the alphabet, the implications are immense.

Software has created today's society. Without software, our society struggles to thrive and prosper. AI becomes as powerful as the first phonetic alphabet by simplifying the language needed to understand and develop software. AI disrupts industry sectors and businesses by providing new opportunities and presenting different approaches to problems that software previously answered.

The ultimate result of AI is to replace the rulers and scribes that established our software society by removing the language barrier between Humans and Software. As the new language, AI can empower us all to speak Software.

With this power in our hands and being child's play to use, what will we do with it?

Read More
Tony Reeves

Ethical Principles for Artificial Intelligence

Ethical principles for AI adoption

To realise the full benefits of AI, we’ll need to work together to find answers to these questions and create systems that people trust. Ultimately, for AI to be trustworthy, we believe that it must be “human-centred” – designed in a way that augments human ingenuity and capabilities – and that its development and deployment must be guided by ethical principles that are deeply rooted in timeless values.

At Microsoft, we believe that six principles should provide the foundation for the development and deployment of AI-powered solutions that will put humans at the centre:

  • Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems (a minimal illustration of one such check follows this list).

  • Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.

  • Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from theft. 

  • Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people. 

  • Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.

  • Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed both during system design and in an ongoing manner as systems operate in the world. 
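As a minimal illustration of the kind of check the fairness principle implies, the sketch below compares selection rates across two groups in a hypothetical decision log. The group names, decisions and the informal threshold for concern are illustrative assumptions, not a complete fairness methodology.

```python
# Compare selection rates across groups in a hypothetical decision log.
from collections import defaultdict

decisions = [  # (group, model decision) pairs; 1 = selected, 0 = rejected
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

for group in sorted(totals):
    rate = selected[group] / totals[group]
    print(f"{group}: selection rate {rate:.0%}")  # large gaps between groups warrant review
```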

Read More