Using AI to save time and money, hopefully
Using AI to save time and money to replace old software is not new. Still, DOGE and the US Social Security Administration are about to run the largest experiment in this area on the planet. It may work, though the odds are not in their favour.
Midjourney v6.1 Exploding AI Bubble
DOGE is starting to assemble a team to migrate the Social Security Administration’s computer systems entirely off one of its oldest programming languages in a matter of months, potentially putting the system’s integrity and the benefits of 65 million Americans at risk[1].
Their target is a massive computer system that provides the backbone for SSA operations, from assessing claims to paying benefits. The system is ancient, with new subsystems bolted onto it over decades. It is written in COBOL, a programming language used on mainframe computers since the 1960s, and a replacement is long overdue.
The current system is costly to run, hard to maintain, difficult to understand, and prone to errors. Replacing it has been on the SSA priority list for years and has always been seen as a massive undertaking: years of code development and migration effort, not months.
DOGE’s unconfirmed approach is to use AI to understand how the SSA software tools work and then use more AI to replace that software with a better version. The resulting system will use even more AI to assess claims, speed up SSA processes, enable quicker development of new subsystems, and ensure correct payments to claimants. All this decreases the risk of fraud or error and the need for staff to manage the process.
A simple example, maybe based on real life
Using AI for these tasks is not a new idea. We deployed similar concepts in the 2000s, replacing human processes and archaic code with automated inference machines. Those efforts did not work, yet the lessons still apply today. By coincidence, Anthropic has just released research on this topic, showing that AI, after over 25 years of research, can still be a problematic tool to understand or use.
So what can go wrong, based on having tried this before? Let’s set up a simple example that is not real and is used purely for illustration. Yet every part of this story, in a related world, is true.
A bank branch transfers the same sum daily from a single account to 10 others. That branch manages all the accounts, so they share the same identifying six-digit sort code, and every account has an eight-digit account number. The teller enters the sort code once, then the paying account number, then the ten receiving account numbers, and finally the sum for each account – let’s assume $100.
An old computer running old code processes this action over several hours. The computer looks up the accounts and transfers $100 from the paying account to those accounts. The payment must be made that day, yet there is no immediate rush to speed up the process as long as it finishes by 5 p.m. If there are any errors, the bank teller gets an alert, examines the account details, and manually makes any changes.
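To make the fictional setup concrete, here is a minimal sketch of that daily batch in Python; the sort code, account numbers and the transfer call are all invented for illustration, not taken from any real system.

```python
# Minimal sketch of the fictitious daily batch transfer.
# Sort code, account numbers and the transfer call are invented for illustration.

SORT_CODE = "123456"            # entered once by the teller
PAYING_ACCOUNT = "10000001"     # eight-digit paying account
RECIPIENTS = [f"2000000{i}" for i in range(10)]  # ten eight-digit accounts
AMOUNT = 100                    # $100 per recipient

def transfer(sort_code: str, source: str, target: str, amount: int) -> bool:
    """Stand-in for the slow, hours-long mainframe job; returns False on error."""
    return True

failed = [acct for acct in RECIPIENTS
          if not transfer(SORT_CODE, PAYING_ACCOUNT, acct, AMOUNT)]

# Any failures raise an alert for the teller to examine and fix by hand.
print(f"{len(RECIPIENTS) - len(failed)} payments made, {len(failed)} flagged for the teller")
```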
Everything works perfectly fine. The teller sets up the payment each morning, checks that they are progressing at lunchtime, and fixes any errors before they depart for the evening. The result is 11 happy customers – the person paying and the 10 people receiving $100 daily. Plus, the teller feels their job has helped 11 people by merely entering a few numbers and checking the odd error.
In setting up the system, the coders made a few simple decisions to make it effective and efficient. First, these transactions are still processed centrally on a mainframe computer to ensure that records are kept and reduce the risk of fraud in a single branch. So, communication between the branch system and the central mainframe must be in a similar language.
Back then, before international enterprise-scale organisations deployed global software tools, development was handled by a handful of people within the firm. A small team would write the code and discuss new features. Most of this was written before Agile existed as a concept and when recording what a routine actually did was quite rare. The people who built the code also ran it, often sitting next to the machine running it, so if anything went wrong, they knew how to fix it.
Co-location meant that fixes could be done quickly, often over the phone, while a developer worked through the issue. It also gave the developers an added bonus: a job for life. The programmer had to be kept happy if you wanted the program to run. Most organisations followed this model, but it created lines and lines of code that only a very few people understood.
When computers began popping up in other locations, these same developers would copy part of their code onto those local machines and let it run. Doing so allowed them to distribute code they knew would work with their central system with little effort.
Testing was often done in-house and by hand, so developers would take time off writing new code or supporting current code to run tests. In the 1990s, big software companies tested their code properly, but in-house teams would simply check that it produced the answer they expected and then distribute it. After all, if it went wrong, they were just a phone call away to fix it.
Errors, errors, errors
Back to the bank teller, happily entering 10 account numbers in the morning and checking progress between helping other customers in the bank face to face. It was still a time when people went into banks in real life for most of their banking needs.
The tellers would notice that errors were rare but would often be similar. For instance, there is a very, very small probability (1 in 100,000,000) that two account numbers could be the same. Still, it is highly unlikely that sort codes would also match (equivalent to matching two human hairs from a line of hairs stretching 7 million kilometres). There is also a possibility that the connection would drop between the branch and central mainframe during a transaction, which was much more likely.
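To put a rough number on that, here is a quick back-of-the-envelope check using the story’s figures (eight-digit account numbers, ten recipients); the arithmetic is mine, not bank data.

```python
import math

ACCOUNT_SPACE = 10 ** 8   # possible eight-digit account numbers

# Chance that two specific accounts happen to share the same number
print(f"Two specific accounts matching: 1 in {ACCOUNT_SPACE:,}")

# Birthday-style chance that any two of the ten daily recipients collide
n = 10
p_no_collision = math.prod((ACCOUNT_SPACE - i) / ACCOUNT_SPACE for i in range(n))
print(f"Any collision among {n} random accounts: about {1 - p_no_collision:.1e}")
```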
The highest technical risk would be that the branch computer crashed from getting too hot, a power cut, or just a bug in the computer. Tellers would enter numbers in small batches to avoid these errors and let them run. Then, if the connection failed or the computer crashed, they just had to enter 10 accounts rather than hundreds at a time.
Of course, the most significant errors were caused by humans: selecting the wrong account numbers for payment, entering the wrong account numbers into the machine, not checking that payments were completed, or writing errors into the code and not testing it.
Another quirk of old systems involved security. Installing an application on a works device is difficult today. Back then, most systems were open if you knew what you were doing, and code was relatively easy to hack.
Consequently, local users added local code to systems. A smart bank teller could access their computer and add a new, local routine. This was much quicker than calling the head office to ask for a new feature, which often resulted in being put through to the coding team, who would be too busy doing their own thing rather than adding user-requested features.
Tellers could add features that checked account numbers. If they entered the same sort code every day, they could add a quick routine that filled it in automatically. They could store the accounts to be paid daily in a separate list and look that up rather than manually enter each account. Users could set up message notifications. In one case I know of, people would use the equivalent of an account entry box to communicate with other tellers, using sort codes as a primitive routing tool. Today, this seems very wrong. Back in the 1990s, hacking was just part of the job.
The central office was too busy to check if this was happening, and where it produced errors, these would be locally contained and managed. Plus, the machines were relatively slow, and multiple people were involved in the actual process, so total errors were few.
AI Bubble of 1990s
Now, let’s introduce the AI Bubble of the late 1990s. Inference machines were all the rage, and inferring information within data sets offered huge gains. One example from our fictitious story would be to use inference to reduce the number of digits processed, which appeared a simple task. In the sequence 12345, the next probable number would be 6. If communication was lost and the sequence was 12?45 then the inference machine would assume the missing number is 3, especially if it had previously seen 12345 regularly.
Computers ran slowly, especially with extensive lists of numbers. If you could halve the number of digits used by only using the last four digits of an account number and assuming that the odds of the same number were still 1 in 50m, then the machine could process those accounts almost twice as fast. If you could infer the missing digits when communications dropped, then you didn’t necessarily need to reconnect and error check; you could proceed as if the connection had remained in place. All these small inferences would save time, and when compute power and storage were costly (read, before the cloud), every digit counted.
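A toy version of that kind of inference routine might look like the sketch below; the historic sequences and the fallback behaviour are invented purely to illustrate the idea.

```python
from collections import Counter

# Guess a digit lost in transmission from sequences the machine has seen before.
history = ["12345", "12345", "12345", "12346", "12945"]

def infer_missing(pattern: str) -> str:
    """Fill a single '?' with the digit seen most often in that position
    among historic values that match everywhere else."""
    pos = pattern.index("?")
    matches = [s for s in history
               if len(s) == len(pattern)
               and all(p in ("?", c) for p, c in zip(pattern, s))]
    if not matches:
        return pattern  # nothing to infer from; hand back to the error path
    best_digit, _ = Counter(s[pos] for s in matches).most_common(1)[0]
    return pattern.replace("?", best_digit)

print(infer_missing("12?45"))  # "12345" – plausible, but "12945" was also seen
```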
Plus, bank tellers still checked the payments each night, and, as a last resort, customers would come into the branch and ask about their missing $100 payment.
Initially, like DOGE, we looked at how inference machines could replace the code. Could we port it from an ancient coding language and make it more effective and efficient? Could it be a Rosetta Stone for ancient languages?
Alas, with colossal code bases of jumbled routines and sub-routines, with no explanation of what any of it did, and with the original coding teams either retired or, more likely, unwilling to help dismantle the job-for-life employment that came with managing the code, this approach was doomed.
What about improving the process and using inference machines at key points? Everyone knew that local branches were running their own code, often to improve their specific work, and helping with that could be beneficial. Looking at what they were doing, several patterns emerged: regular activities with similar information that could be repeated and replicated. AI thrives on repeatable patterns, after all!
At the same time, computer hardware was massively increasing in performance and significantly dropping in price. It became possible to start running inference routines locally and, with the emergence of a more stable internet, centrally collect local data more reliably. Rapidly, it became feasible to reduce the work done by local tellers, centrally run processes at hugely increased rates, and collate error detection with a smaller team centrally managed. This centralisation would allow more tellers to conduct work face-to-face with customers and reduce the security risks of local branches running unauthorised code.
For comparison, rather than running 100 payments daily, these changes enabled 1,000 payments every second. That speed increase also raised our fictional model’s daily error cost from $10,000 to over $28 million in just 8 hours.
Of course, it went wrong
First, errors are relative to the number of processes run and the completion time. 100 payments over 8 hours, with a human checking at the start, middle, and end of that activity, would reveal few errors, and those could be quickly fixed by hand. Even at a slow rate, 100,000 processes per day would naturally reveal more: if just 1 in 100,000 processes created a mistake, errors immediately became a daily occurrence rather than something seen once every three years.
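The arithmetic behind that shift is easy to sketch; the 1-in-100,000 error rate and the throughput figures are the story’s illustrative numbers, not measurements.

```python
# How expected error volume scales with throughput.
ERROR_RATE = 1 / 100_000   # one mistake per 100,000 processes (illustrative)

scenarios = {
    "teller-paced (100 payments/day)": 100,
    "early automation (100,000/day)": 100_000,
    "accelerated (1,000/sec for 8 hours)": 1_000 * 3_600 * 8,
}

for name, daily_volume in scenarios.items():
    expected = daily_volume * ERROR_RATE
    if expected < 1:
        print(f"{name}: roughly one error every {1 / expected:,.0f} days")
    else:
        print(f"{name}: about {expected:,.0f} errors per day")
```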
Doing more things simultaneously also allowed new things to be attempted. Payments could be scheduled, multiple account transfers could be linked, and transaction recording could be simplified. Suddenly, that legacy code that had been super-accelerated really didn’t look that great. Unstructured, poorly documented, incrementally built code had plenty of errors within it just waiting to break free. With many humans in the loop and a slow rate of discovery, these could be fixed by calling up the central developer team and having a chat. With thousands of errors suddenly unleashed at once, the developer team became swamped and unable to respond. Plus, morale plummeted as all the bad coding practices they had previously ignored came back to haunt the team.
More importantly, the AI started to show weird errors, not all bad. AI unearthed oddities that were previously missed but then appeared obvious when seen. Odd quirks in the system became apparent. For instance, some account numbers would appear again and again. Often, this would be genuine fraud or crime, with money being syphoned off without permission.
Other times, the inference routines would get stubborn or simply grumpy. Rather than predict 6 for 12345, they would produce 1 or 9. Why? Genuinely, people found it hard to work out, an issue that continues with AI today[2]. Data analysis would point to the Newcomb-Benford Law of Anomalous Numbers[3] as a possible reason why random numbers are not truly random, but the inference would still sometimes act oddly[4].
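For readers who have not met it, Benford’s law predicts how often each leading digit should appear in many naturally occurring data sets, so a payments ledger that drifts away from it is worth a second look. A minimal sketch, with made-up payment amounts:

```python
import math
from collections import Counter

# Expected Newcomb-Benford distribution of leading digits versus an observed
# set of payment amounts (the payment data here is made up).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

payments = [100, 100, 120, 4500, 130, 980, 100, 210, 100, 100]
observed = Counter(int(str(p)[0]) for p in payments)

for d in range(1, 10):
    print(f"digit {d}: expected ~{benford[d] * len(payments):.1f}, observed {observed.get(d, 0)}")
```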
Overall, the inference systems failed. They accelerated too fast, did not truly understand the base code, had no way of addressing unknown errors within the system, removed the humans who made the systems work (the tellers), and annoyed the humans tasked with fixing the system (the developers). Most importantly, customers suddenly became grumpy when machines failed to complete simple transactions. All because a clever AI was being used well beyond its limits.
Top Three Recommendations for turning AI loose
What would be the three top recommendations before DOGE turns AI loose on SSA?
Spend more time on the base code to understand what it is doing. AI can help here, but it still cannot replace human insight. The system is probably not doing what it says it is doing, AI is not doing exactly what it says it is doing, and AI is unable to determine whether that is an intended or unintended consequence. In short, AI doesn’t know why it made a decision.
Test the current system and use AI to run those tests. Test with extreme corner cases and larger data sets than expected. If you cannot test on massively scaled data, run on minimal data until you can test it at scale. Sixty-five million humans is not a test data set. It’s their lives.
Include the people who built the machines and operate the processes. They know how it all works, where the issues really exist, and the workarounds that are employed to get the job done. They are the actual hackers of the machine and need to be involved rather than excluded.
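On the second recommendation, here is a minimal sketch of what generating synthetic corner-case records might look like; the field names, edge values and volumes are invented, not drawn from any SSA system.

```python
import random

# Generate synthetic corner-case claim records instead of testing on real people.
# Field names, edge values and volumes are invented for illustration.
def synthetic_claim(rng: random.Random) -> dict:
    edge_birth_years = [1899, 1900, 1999, 2000, None]       # century boundaries, missing data
    edge_amounts = [0, -1, 0.005, 10 ** 9]                  # zero, negative, rounding, overflow
    edge_names = ["O'Brien", "Ng", "名前", " " * 40]          # awkward but legal strings
    return {
        "birth_year": rng.choice(edge_birth_years),
        "monthly_benefit": rng.choice(edge_amounts),
        "name": rng.choice(edge_names),
    }

rng = random.Random(42)
test_set = [synthetic_claim(rng) for _ in range(100_000)]   # scale well beyond the expected load
print(test_set[0])
```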
Anyone can break things, but breaking things and making them better needs experience of the actual thing being improved. Years after my experiences of breaking things and not always making them better, I heard a German word, “Verschlimmbesserung”. It is my favourite from a nation famous for using single words to express an entire sentence. It describes an action that is supposed to make things better but ends up making them just a little bit worse.
Misused AI can be the epitome of this sentiment. It is well-intentioned but not always understood, promising huge steps forward but resulting in making things worse. People are trusting AI to improve their lives, but that trust is fragile. Getting an AI deployment wrong at the scale proposed in the SSA might not just impact 65 million humans who desperately need the payments that SSA provides; it may tar the whole deployment of AI and create another burst AI bubble.
[1] DOGE Plans to Rebuild SSA Code Base in Months, Risking Benefits and System Collapse
https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
[2] Anthropic recently published a paper on this phenomenon: ‘Circuit Tracing: Revealing Computational Graphs in Language Models’
[3] Benford’s Law: Explanation & Examples
[4] Better explained here ‘AI Biology’ Research: Anthropic Explores How Claude ‘Thinks’
[5] DOGE’s new plan to overhaul the Social Security Administration is doomed to fail
Language barriers remove humans from the loop, AI smashes AI-proof tests, and using AI atrophies human cognitive functions
Midjourney v6.1 AI talking to AI at fast pace, humans ignored
The adoption of AI in the workplace is proceeding at pace whether companies want it or not. The FT highlights the trend of people paying for their own AI tools to help at work, At work, a quiet AI revolution is under way, and how AI is helping many people write better or express themselves more clearly. It asks whether it matters if a birthday note is penned by AI, or whether it is still the thought that counts.
Google has revealed a “Co-Scientist” that could lead to scientific breakthroughs, Google reveals ‘Co-Scientist’ AI it says could lead to huge research breakthroughs | The Independent, in a great demonstration of how AI is working alongside people to enhance human ingenuity.
Do you rely upon SatNav to get you from A to B when driving? Microsoft and Carnegie Mellon research found that where automation deprived workers of the opportunity to use their judgement, it left their cognitive function “atrophied and unprepared” to deal with anything beyond the routine. Microsoft says AI tools such as Copilot or ChatGPT are affecting critical thinking at work – staff using the technology encounter 'long-term reliance and diminished independent problem-solving' | ITPro A potentially worrying challenge for military officers expected to exercise critical judgement in high-pressure moments.
OpenAI has released a new model, o3, that has smashed traditional AI performance metrics The Dawn of a New Era: OpenAI’s o3 Model Surpasses the Best of Us. Key measures show it achieved:
25.2% on FrontierMath, a collection of ultra-hard problems that stump even professional mathematicians. Previously, AI scored around 2%.
96.7% accuracy on the AIME Math Test, correctly answering 14 out of 15 questions. Exceptional humans consider 10 correct answers notable.
87.7% at graduate-level science in the GPQA Diamond, where PhD experts score around 70% in their own field. AI has typically failed this examination, which is “Google proof”, involving novel problems and reasoning rather than memorised knowledge.
2727 Codeforces Elo score in competitive coding, putting o3 in the top 200 coders globally and above the team that built the model.
Reasoning score of 88% on the ARC-AGI Benchmark, an assessment created to demonstrate the limitations of AI by testing logic, reasoning, and intuition. Typical humans score around 70%.
The last score, ARC-AGI, is the most impressive. From 2020 to 2024, LLMs struggled with this assessment, scoring only around 4%. By the end of 2024, the best models had crept up to 35%, but within three months o3 achieved 88%, almost making the test irrelevant as a way of separating humans from AI.
Finally, as AI increasingly sees humans as the slow point in its progress, one team has developed a way for chatbots to interact with each other at a faster pace, Two AI chatbots speaking to each other in their own special language is the last thing we need | TechRadar. Science fiction has long suggested that robots would not use human language to communicate because it was too slow (think R2-D2 in Star Wars), and this could be the first step towards faster AI-to-AI communications. The human could be removed from the loop because they no longer speak the language used in the loop.
AI Update: Billions and Bytes - The UK's Half Full Cup in the Global AI Race
The UK AI plan launched by the PM and its need for money is covered here, UK has half of what it needs to be an AI hub, and the cost of AI is a bit of a theme this week. Basically, the UK is good at unicorns but poor at scaling. Defence is also guilty here, it scales poorly but could be a major player on the investment and funding aspects.
Midjourney v6.1 the UK’s half full cup in the global AI race
The potential benefits, and how AI can help build a better society, are well covered here: How we can use AI to create a better society.
The environmental cost of AI is taken apart here, Using ChatGPT is not bad for the environment, which points out that a ChatGPT query uses the same energy as two emails (and globally 160bn spam emails are sent EVERY DAY), that a hamburger takes 660 gallons of water to produce, one hour of US TV consumes 4 gallons, and 300 ChatGPT queries use 1 gallon. If we’re going to reduce energy or water consumption, there are more beneficial (and easier) areas to focus on. Leaking pipes in the US waste 10,600 million gallons of water every day. 😮
A quick update on how industry is using GenAI is covered here, State of Generative AI: 5 Takeaways and 5 Actions to Help Drive Progress - WSJ, with 40% of organisations saying that they are already achieving their expected benefits in this area.
Finally, Stargate and the $500Bn US investment is huge news, Trump announces private-sector $500 billion investment in AI infrastructure | Reuters provides a good summary, and this is mostly centred on Texas. The Guardian neatly summarises the subsequent spat between Musk and Altman here Tech titans bicker over $500bn AI investment announced by Trump | Elon Musk | The Guardian with Satya Nadella politely just saying, “All I know is, I’m good for my $80bn. I’m going to spend $80bn building out Azure.”
The launch of AI as a Global Strategic Weapon
In 2025, AI will be recognised as a strategic weapon alongside cyber warfare. The development of AI capabilities has progressed in half the time it took to transform cyber capabilities into practical military tools. This rapid advancement means military leaders must quickly enhance their understanding of AI before it is used against them.
Midjourney v6.1 The world as an AI GPU
Entering the year of w-AI-rfare and the snake
The popular view of weaponised AI is Terminator robots, evil supercomputers, or new super-lethal drones. These are all possible but tactical in nature. AI is now strategic, and in January 2025, we witnessed three significant deployments of AI as a weapon of strategic power.
President Biden conducted the first strike on 13 January, restricting access to GPUs, the core component for training and deploying AI capabilities[1]. The restrictions are complex but aimed at crippling adversaries developing AI and bolstering the US leadership in this field.
Controlling Global Compute and Processing
The core components of the announcement first restrict global hyperscalers like Microsoft, AWS, and Google in how many AI data centres they can deploy outside the US: they must now put 50% of their global compute capability into the US. Second, 18 allies were granted permission to purchase GPUs for large-scale data centres in their countries; these included the United Kingdom, the Netherlands, and Belgium but excluded NATO allies like Poland. Excluded countries, adversaries or allies, would be massively restricted or prevented from acquiring the computing capability needed to develop or deploy AI at any significant scale.
This first strategic move sought to maintain the US's global dominance in AI development and deployments. It followed two previous sanctions lists in 2024 that targeted China specifically, although more on China’s response later.
The second strategic launch of AI was made by the new President Trump on 22 January, with the announcement of at least $500bn of investment in AI under the name Project Stargate[2]. The investment is led by Japanese giant SoftBank, ChatGPT creator OpenAI, cloud giants Oracle and Microsoft, and NVIDIA, which supplies the processors.
Investment double the size of the Apollo Programme
In cash terms, adjusted for inflation, this sum is almost double the cost of the Apollo Space Programme. It includes $100bn to construct an AI supercomputer for accelerating artificial general intelligence, the AI holy grail that outperforms humans in thinking[3].
It is an incredible financial investment that no other country can approach, all focussed on true technology world leaders. The outcome was also straightforward: secure American leadership in AI.
In comparison, the UK AI Opportunities Action Plan[4], launched the week before, committed £14bn (US$17.5bn) to AI research and development. That is not an insubstantial sum, and the UK is the fourth-largest global investor in AI[5], but the UK plan stretches over ten years. The US plan is over four years and spends nearly 30 times more.
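A rough comparison of those headline figures, ignoring inflation and how the money is actually phased (the arithmetic is mine, not the plans’):

```python
# Headline investment figures from the text, compared in total and per year.
us_total_bn, us_years = 500, 4      # Stargate: at least $500bn over roughly 4 years
uk_total_bn, uk_years = 17.5, 10    # UK Action Plan: ~US$17.5bn over 10 years

print(f"Total spend ratio: about {us_total_bn / uk_total_bn:.0f}x")
print(f"Annualised ratio:  about {(us_total_bn / us_years) / (uk_total_bn / uk_years):.0f}x")
```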
Combined, these two US policies restricted their allies and tied them to significant US dependency, at least for AI compute capability. The UK Action Plan recognised this constraint: “The UK does not need to own or operate all the compute it will need.” However, the plan also recognises that it will require compute capability that is both Sovereign (owned and operated by the public sector for independent national priorities) and Domestic (UK-based and privately owned and operated). Building that capability will now require US agreement and, probably, US partnerships.
For US adversaries already starved of AI computing capacity, the two US announcements significantly raised the bar in terms of investment that they would need. The announcements created a global shortage of GPUs outside the US, ensured that significant numbers of processors would remain in the US, and probably raised the cost of acquiring GPUs due to the consequent global shortage.
Chinese Economic and Political Counter Moves
At this point, China deployed AI as an economic strategic weapon. DeepSeek AI[6] is a large language model (LLM) and a generative AI like OpenAI ChatGPT. The model itself, R1, posts impressive capabilities on par with the latest public versions from Google and OpenAI in reasoning terms.
The model was developed in November 2024, but public awareness was conveniently timed to coincide with President Trump's inauguration. Headlines declared China’s Sputnik moment, referencing how the USSR surprised and threatened the US with its first satellite launch.
Launching the model was not a strategic strike. The blow came when it was revealed that the model was trained on a limited number of NVIDIA A100 chips for a rumoured $6m. In comparison, OpenAI trained its latest version on around 25,000 A100 chips and spent between $32m and $63m on training[7]. A significant potential saving in terms of cost and energy.
Suddenly, China had an AI capability that could be trained on fewer chips and for 10-20% of the cost. The stock market, especially the US NASDAQ, panicked. In one day, NVIDIA's market value dropped by $600bn as its share price fell almost 17%, the biggest single-day loss of market value in US stock market history[8]. The NASDAQ Composite Index dropped 3% overall. SoftBank, one of the biggest investors in Stargate, dropped 8% in value, wiping out all the gains seen since the ambitious project was announced.
A few days later, it became more apparent that DeepSeek R1 was not all it appeared to be. Its training had used massive amounts of OpenAI and Anthropic data and probably used those LLMs to improve and refine its capabilities. It is also true that technological advances make new deployments easier. In this case, China developed an equivalent capability almost 12 months after the first US version was released and trained it using those US models. That would naturally cost far less compute than training a new model from scratch.
Nonetheless, a relatively modest AI was released in a way that damaged US investment and economic confidence in AI. This was a genuinely strategic deployment of a small asset.
The other strategic impact of the DeepSeek R1 announcement was timing. It landed the same week President Trump took office, two weeks after the US announced new restrictions on the sale of GPUs globally, especially to China, and one week after Stargate launched. DeepSeek announced that it had stockpiled the A100 GPUs needed for its training because of 2022 sanctions limiting their sale to China. In truth, it had to train on limited compute capacity because that was all it could muster.
The Biden announcement in January makes it even harder for China to acquire these old-generation chips, never mind NVIDIA’s newer H100 units. It introduced restrictions on countries that may have bought large quantities and resold them, with the chips possibly ending up in locations that cannot buy them directly. The January announcement makes training harder and deploying next-generation AI almost impossible without US permission.
So China announced that it could create competent AI using old chips in the same week that the Stargate Project was launched and Trump became President. Trump quickly shared the news with glee, saying it was a wake-up call for US technology[9].
It is highly probable that the strategic deployment of R1 was intended to shake economic confidence in AI and to use political posturing to encourage one person to change policy: President Trump, to remove or replace the current GPU sanctions. It’s a hard sell, as Trump seems to love tariffs and sanctions, but he also hates losing, so if he thinks sanctions aren’t winning, he may remove them.
China has first Alan Shepard Moment
At the moment, China has caught up with the US on AI. The talks of Sputnik[10] seem to ignore that OpenAI already launched the equivalent of Sputnik in November 2022 with public ChatGPT. If anything, China has finally gotten a rocket into space, but headlines like "China has first Alan Shepard moment" just don't generate clicks.
Yet it is not the weapon of AI that is of interest; it is its strategic deployment. The US has weaponised computational power and access to processors to maintain its economic advantages through AI and, subsequently, the military advantages that AI could bring. In return, China politically and economically deployed AI to damage both confidence and investment, possibly achieving its strategic goals of relaxed processor control.
The rest of 2025 will likely see further strategic uses for AI. The next will be deploying the next generation of large language models, with a probable step change in capability from current versions. Those releases, all coming from US companies, will set a new standard that will be hard to match with models trained on a stockpiled stack of legacy processors.
That release will also prompt China to react and counter differently. It will probably look at exploiting its grip on rare earth minerals, which are essential for the tools and devices that enable AI. Alternatively, it may increase its economic ownership, where it can, within the AI supply chain.
Yet, barely a month into 2025, on the day China celebrates its new year, it is clear that, like cyber, AI is now both a strategic asset and a weapon.
[1] US tightens its grip on AI chip flows across the globe | Reuters
[2] Announcing The Stargate Project | OpenAI
[3] Sam Altman Wants $7 Trillion - by Scott Alexander
[4] AI Opportunities Action Plan - GOV.UK
[5] https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/762285/EPRS_ATA(2024)762285_EN.pdf
[6] What is DeepSeek - and why is everyone talking about it? - BBC News
[7] GPT-4's Leaked Details Shed Light on its Massive Scale and Impressive Architecture | Metaverse Post
[8] What is DeepSeek and why has it caused stock market turmoil?
[9] DeepSeek a 'wake-up call' for US tech sector - Trump
[10] Chinese AI DeepSeek jolts Silicon Valley, giving the AI race its 'Sputnik moment'
DeepSeek R1 Review and Quick Study
A fast and informal understanding of the Chinese R1 DeepSeek Large Language Model. This will be developed into something more substantial shortly.
China has created an LLM that is on par with similar US offerings. It has done this on lower compute capacity than is required to build similar LLMs[1]. However, the solution does look to have lifted heavily from US models, even generating OpenAI and Anthropic references when tested[2]. This is probably because R1 was trained on OpenAI and Anthropic data, which leads to an interesting chicken-and-egg dilemma when building new models[3].
The timing of the announcement probably says more about why this is suddenly big news. The R1 model was developed in November, publicly announced, and released to markets in December/January. It became big news in the West around 24 January, coincidentally the same week President Trump took office and two weeks after the US announced new restrictions on the sale of GPUs globally, especially in China.
These restrictions followed tighter measures in November 2024 (around the same time DeepSeek released its last press announcement about stockpiling and using A100 chips). These two sanctions measures are only now starting to take effect[4].
So China announced that it could create good AI using old GPUs the same week as Trump became President, and Trump quickly shared this news with glee, saying it was a wake-up call for US Technology.[5]
Trump is right that this is a positive, as doing similar things on less compute is precisely what Moore's Law tried to predict. We should expect to do more with the same or less as knowledge increases. It certainly helps that Google/OpenAI/Anthropic demonstrated how to do it and shared so much content to help develop similar models.
BUT what about the next generation? Educated guesses are that OpenAI’s GPT-4 used 25,000 A100 GPUs training over roughly 90 days[6]. This had a significant energy cost, around 50 gigawatt-hours[7]. Estimates are that each new LLM generation costs around 30x more in processing and energy, so the next version might need 1,500 gigawatt-hours[8].
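Those figures hang together if you assume roughly 0.9 kW per GPU once host servers and cooling are included; that per-GPU draw is my assumption to make the arithmetic visible, not a quoted number.

```python
# Rough reconstruction of the training-energy estimate quoted above.
gpus = 25_000
days = 90
kw_per_gpu_all_in = 0.9          # assumed: GPU plus host server and cooling overhead

gpu_hours = gpus * days * 24
energy_gwh = gpu_hours * kw_per_gpu_all_in / 1_000_000     # kWh -> GWh
print(f"{gpu_hours:,} GPU-hours, roughly {energy_gwh:.0f} GWh")

# The 30x-per-generation rule of thumb from the text
print(f"Next generation at 30x: roughly {energy_gwh * 30:,.0f} GWh")
```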
Little in the R1 or DeepSeek approach fundamentally challenges these assumptions. According to DeepSeek’s own research, there may be a marginal reduction in the number of A100s needed, or fewer H100s used. It is also likely that other companies, probably in the US, will learn to use fewer GPUs to train their next models. Necessity is the mother of invention, and needing $7 trillion certainly requires innovation. However, the next generation of AI will still be costly and burdensome to train.
Ultimately, this looks more like political posturing encouraging one person to change policy—President Trump to remove or replace current GPU sanctions. It's a hard sell, as Trump seems to love sanctions but also hates losing. So, if he thinks the sanctions aren't winning, he may remove them.
At the moment, China has caught up with the US on AI. The talk of a Sputnik moment[9] seems to ignore that OpenAI already launched the equivalent of Sputnik in November 2022 with public ChatGPT. If anything, China has finally managed to get a rocket into space, but I guess headlines like "DeepSeek has first Alan Shepard moment" just don't generate clicks.
Now, if China launches a GPTv5 equivalent ahead of the US, manages it on a fraction of the necessary compute, and doesn’t just copy existing US models, then I may be worried. And just in case that happens, and especially for today, have a prosperous and joyful Year of the Snake!
[1] DeepSeek: What You Need to Know - Gradient Flow
[2] OpenAI Accuses DeepSeek of Intellectual Property Theft - Geeky Gadgets
[3] DeepSeek R1 struggles with its identity – and more • The Register
[4] US tightens its grip on AI chip flows across the globe | Reuters
[5] DeepSeek a 'wake-up call' for US tech sector - Trump
[6] GPT-4's Leaked Details Shed Light on its Massive Scale and Impressive Architecture | Metaverse Post
[7] Using ChatGPT is not bad for the environment
[8] Sam Altman Wants $7 Trillion - by Scott Alexander
[9] Chinese AI DeepSeek jolts Silicon Valley, giving the AI race its 'Sputnik moment'
Playing Pool with a Robot. Who wins when AI takes the shot?
Imagine you’re playing pool with a friend.
The rules are the same as usual, and you are playing against a good friend who is highly competitive. Much as you like your friend, they really like winning and will seldom let you forget if you lose.
Winning is, therefore, a big thing for both of you.
So, to help you win, both of you agreed to have an AI robot helper. This robot AI can make shots on the table with its clever robot arm, play pool as well as you, learn to get better every time it takes a shot, and watch you and your friend take your shots. The only restriction is that you can use the AI every other shot and not all the time.
Both you and your friend alternate turns to take shots, first you, then them, then your robot AI, then their robot AI, before returning to you. After a while, you realise that your robot AI is particularly good at getting out of tricky situations and can play shots you usually miss when stuck behind another ball.
Now, towards the end of the game, it is very close. Your robot AI player has got you out of several tricky situations, and you’ve been able to pot most of your target balls. Your friend has your cue ball stuck in a corner, but if you can make the shot, you will win the game. But if you miss it, your friend will very likely clean up the table. Already, they are looking unbelievably smug at how they have snookered you. Surely you cannot let them win and face further smugness?
It’s your robot AI’s turn to take the shot and possibly win the game. Do you let them take it?
Of course, many of us will read this story and believe, quite honestly, that we would stride forward, push the robot to one side, play the shot of our lifetime and win the game. Hurrah! We like to see ourselves as winners, but letting a robot win on our behalf doesn’t really feel like winning.
Yet, facing your friend’s smug face, many of us would let the robot AI take the shot. We would tell ourselves that it’s the robot’s turn and that we’ve played as a team the whole game. If it were the other way around, of course, we would take the shot, but just this once, it looks like the robot AI will win, and that’s precisely within the rules of the game.
We would hand over to the robot AI, exactly as game theory suggests we would behave.
The Game Situation
Now, let’s change the narrative slightly. In this game, your friend has let their AI play every shot. They are so smug and confident that they believe their AI can beat you without your friend taking a single turn. So far, you have stayed in the game and now find yourself in the same situation. You are in a tricky position; you must make the shot to win the game, and it’s your robot AI’s turn to play the shot. Your friend and even their robot AI are now looking at you smugly. Would you still stride forward and take the shot, or let your robot AI, with a slightly higher probability of success, take the shot for you?
In this scenario, playing against a robot AI acting on behalf of your friend, most people would let their own robot AI take the shot and cheer loudly as the ball rolls into the pocket and they win the game. You would feel no shame in letting your robot AI take its turn; after all, your friend’s robot AI has taken every turn for them. Take that, smug friend and their even smugger robot AI!
These situations are precisely where we find ourselves in the adoption and deployment of AI. For the last decade, there have been well-intentioned and deeply considered reports, studies, conferences, and agreements on AI’s ethical and safe adoption. Agreements, statements, and manifestos have been written warning of the dangers of using AI for military, police, and health scenarios. Studies on the future of work, or lack of it, abound as we face an employment market driven by AI.
We would all agree that any AI that harms, damages, restricts or prevents human ingenuity or freedom would be a step in the wrong direction. We are all on the side of ethicists who say that death at the hands of a killer robot should be banned. We would all want to ensure that future generations can work with dignity and respect for at least a liveable wage.
We would agree until competition enters our lives.
Facing the AI Dilemma
Our brave adoption of ethical AI principles may face a more substantial test of winning or losing. At that point, would we remain ethical if losing looks likely? Would our ethical principles remain if faced with defeat by an enemy using AI to win? Applying game theory to these scenarios indicates a bleak future.
We all know that collaboration and working together is often better for us all. It is the basis of modern civilisation. We may also know that tit-for-tat with occasional forgiveness can be a better winning strategy than unconditional cooperation, but only if we can trust the other player to vary their pattern. Suppose the other player always goes for the negative but advantageous choice. In that case, any competitor will likely follow the same harmful path.
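The dynamic is easy to see in a minimal iterated prisoner’s dilemma; the payoffs and round count below are standard textbook values, not anything taken from the call-centre example that follows.

```python
# Minimal iterated prisoner's dilemma: tit-for-tat against an always-defect player.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))    # mutual cooperation pays
print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))  # dragged into mutual defection
```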
We can see where this approach is heading if the favourable option is partial AI adoption and the negative option is complete replacement of a service or function with AI. For instance, an organisation or company that fully adopts AI in its call centres will seize a competitive advantage over others who retain human operators. Their competitors may argue that retaining employees is the right thing to do, that customers prefer real human interaction even if they struggle to tell the actual difference, or that there is a brand advantage to retaining humans in their service.
Ultimately, though, the AI service will prove significantly cheaper and, based on current deployments, will provide at least as good a service as human call centres for most scenarios. Customers of recent AI call centre deployments show that they cannot tell the difference between humans and AI. Faced with such a choice, will a competitor keep the equivalent of playing their own pool shots or hand over part of their service to a robot AI?
The immediate impact is faster service as the wait time for an operator disappears and a significant reduction in operating costs. The business may use these savings to create better services and products, invest in reskilling call centre staff, or use them as profit and reward.
In the longer term, the impact of this simple game decision, scaling across call centre companies, would be much bleaker.
When 4% of the UK workforce, around 1.3m people, work in call centres, we can begin to feel the impact of such a decision. In the region of Newcastle, North East England, there are 178,000 call centre employees, primarily female. These workers are often the only income earners for their families after the manufacturing sector in the area collapsed due to cheaper foreign alternatives and manufacturing automation.
A few companies may avoid this movement and value their human employees more than their profits, revenue, or shareholder return. They may ensure alternative employment is found for their workers or provide reskilling initiatives. The reality is that most will be unemployed, with few transferable skills, and in a region with the highest unemployment rate in the UK.
The AI Adoption Conundrum
When faced with a snooker game where one company adopts AI, the competitors will quickly follow suit. Unlike previous automation waves, like in car manufacturing, which took place over several years, this wave will be quick. Service sector work, especially work centred on information and data, can be quickly automated in months rather than years.
That timescale does not allow the creation of alternative jobs, new skills to be learned, or employment opportunities. It is at a pace that most people would struggle to comprehend or manage, let alone emerge from with better prospects.
The impact rapidly expands beyond the individuals struggling to find employment. The first impact would be a surge in benefit claims, increasing government expenditures. At the same time, employees’ income tax payments would vanish, and their employer’s national insurance payments would cease. Costs would rise, and tax income would decrease. Local expenditures would diminish, impacting other businesses and employment.
This pattern was seen in the North East and in similar cities like Detroit in the US. In the 1980s, heavy industry and manufacturing collapsed. In 1986, as an example, Newcastle was reeling after the Royal Ordnance factory closed (400 jobs), two coal mines closed (2,000 jobs), shipyards closed (3,000 jobs), British Steel mills shut (800 jobs), along with NEI Parsons (700 jobs) and Churchills (400 jobs). Over 7,000 people were unemployed within 12 months, and these were just the large employers. Countless small businesses closed at the same time.
The consequences of these closures in the Northeast were decades of stagnation until service industries like call centres eventually moved into the region and employment picked up once more.
Now, we stand at a similar crossroads. Still, this time, the threat of AI-induced job displacement looms over the very service sectors that once revitalised communities. The allure of efficiency and cost-cutting is undeniable. Still, the human cost could be far steeper than any short-term gain. Just as in the pool game, where we were tempted to hand control over to a robot AI, in the real world, businesses and individuals may find themselves willing to let AI take over vital tasks for the sake of winning in a competitive market.
However, unlike in the game, the stakes here are much higher. While letting the AI play might win you a single match, relying too heavily on AI in society could unravel the social fabric of entire regions. The ripple effects of AI replacing human workers are not confined to immediate job losses; they extend to the erosion of livelihoods, communities, and human dignity.
The lesson from our robot AI pool game is clear: the allure of short-term victory should not blind us to the long-term consequences. Winning at any cost, whether in a friendly match or in business, often leads to outcomes that benefit only a few while leaving many behind. We must approach AI adoption not merely through the lens of efficiency but with a deep sense of responsibility toward our workforce and broader society. In the game of life, true victory lies not in replacing humans with AI but in finding a balance that empowers both to thrive.
Ensuring the Safety of AI: The Key to Unlocking Autonomous Systems' Potential
As artificial intelligence (AI) revolutionises industries from healthcare to transport, one critical factor holds back widespread adoption: assurance. Defence Science and Technology Laboratory (Dstl) has released a comprehensive guide, "Assurance of Artificial Intelligence and Autonomous Systems," exploring the steps necessary to ensure AI systems are safe, reliable, and trustworthy.
The Biscuit Book underscores the need for assurance as a structured process that provides confidence in the performance and safety of AI and autonomous systems. Without it, we risk deploying technology either prematurely when it remains unsafe, or too late, missing valuable opportunities.
Why Assurance Matters
AI and autonomous systems increasingly tackle complex tasks, from medical diagnostics to self-driving cars. However, these systems often operate in unpredictable environments, making their behaviour difficult to guarantee. Assurance provides the evidence needed to instil confidence that these systems can function as expected, especially in unforeseen circumstances.
Dstl defines assurance as the collection and analysis of data to demonstrate a system's reliability. This includes verifying that AI algorithms can handle unexpected scenarios and ensuring autonomous systems behave safely.
This is part of a series summarising guidance and policy around the safe procurement and adoption of AI for military purposes. This article looks at the Dstl Biscuit Book on AI assurance, available here: Assurance of AI and Autonomous Systems: a Dstl biscuit book - GOV.UK (www.gov.uk)
Midjourney v6.1 prompt: Ensuring the Safety of AI: The Key to Unlocking Autonomous Systems' Potential
Navigating Legal and Ethical Challenges
AI introduces new legal and ethical dilemmas, particularly around accountability when things go wrong. The report highlights the difficulty in tracing responsibility for failures when human operators oversee systems but don't control every decision. Consequently, legal frameworks must evolve alongside AI technologies to address issues like data privacy, fairness, and transparency.
Ethical principles such as avoiding harm, ensuring justice, and maintaining transparency are essential in developing AI systems. However, implementing these values in real-world scenarios remains a significant challenge.
From Algorithms to Hardware: A Complex Web of Assurance
The guide covers multiple areas where assurance is necessary:
Data: Ensuring training data is accurate, unbiased, and relevant is critical, as poor data can lead to unreliable systems.
Algorithms: Rigorous testing and validation of AI algorithms are essential to ensure they perform correctly in all situations.
Hardware: AI systems must rely on computing hardware that is secure and operates as expected under all conditions.
Ensuring all these components work seamlessly together is complex, which is one reason we don't yet see fully autonomous cars on the roads.
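As one concrete illustration of the "Data" item above, an assurance pipeline usually starts with simple automated checks on the training data; the dataset, field names and thresholds below are invented for illustration.

```python
# Minimal sketch of automated training-data checks: required fields present,
# no missing values, and no class left badly under-represented.
records = [
    {"label": "vehicle", "speed_kph": 42.0},
    {"label": "vehicle", "speed_kph": 38.5},
    {"label": "pedestrian", "speed_kph": 5.1},
]

def basic_data_checks(rows, required=("label", "speed_kph"), min_per_class=2):
    issues = []
    counts = {}
    for i, row in enumerate(rows):
        missing = [f for f in required if row.get(f) is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")
        counts[row.get("label")] = counts.get(row.get("label"), 0) + 1
    for label, n in counts.items():
        if n < min_per_class:
            issues.append(f"class '{label}' under-represented ({n} example(s))")
    return issues

print(basic_data_checks(records) or "no issues found")
```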
The Ever-Present Threat of Adversaries
As AI systems become more integrated into society, they become attractive targets for adversaries, including cybercriminals and rogue states. Small changes in data or deliberate attacks on system inputs can cause catastrophic failures. To mitigate these risks, Dstl advocates for rigorous security testing and using trusted data sources.
A Costly but Necessary Process
Assurance comes at a price, but it's necessary to avoid costly failures or missed opportunities. The Dstl Biscuit Book emphasises that the level of assurance required depends on the potential risks involved. For example, systems used in high-risk environments, such as aviation, require far more rigorous testing and validation than lower-risk systems.
Ultimately, assurance isn't a one-time activity. As AI systems evolve and adapt to new environments, ongoing testing and validation are needed to maintain safety and trust.
Looking Ahead
The Dstl Biscuit Book remains a highly relevant reminder of the challenges in ensuring AI systems are safe and reliable. While AI holds incredible potential to transform industries and improve lives, the journey to fully autonomous systems requires a careful balance of technical expertise, ethical responsibility, and robust assurance frameworks.
For now, it's clear that unlocking the full potential of AI and autonomous systems hinges on our ability to assure their safety at every step.
Council of Europe Adopts Groundbreaking Framework on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
On September 5, 2024, the Council of Europe introduced a landmark legal framework, CETS 225, also known as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. This Convention sets ambitious goals to align artificial intelligence (AI) systems with fundamental human rights, democratic principles, and the rule of law, offering guidelines to address the opportunities and risks posed by AI technologies.
This is a landmark as the first-ever international legally binding treaty to ensure that the use of AI systems is fully consistent with human rights, democracy, and the rule of law. It was signed by the UK, the US, the EU, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel.
Part of a series summarising AI policy and guidance. This article examines the Council of Europe Framework on AI and Human Rights, Democracy and the Rule of Law that can be found here: CETS_225_EN.docx.pdf (coe.int)
A Balanced Approach to AI
The Convention recognizes AI's dual nature: while AI systems can promote innovation, economic development, and societal well-being, they also carry significant risks to individual rights and democratic processes if left unchecked. The preamble acknowledges AI's potential to foster human prosperity but highlights concerns over privacy, autonomy, discrimination, and the misuse of AI systems for surveillance or censorship.
Scope and Purpose
CETS 225's primary goal is to create an international legal framework governing the entire lifecycle of AI systems—from design and development to deployment and eventual decommissioning. The Convention's scope covers public authorities and private entities involved in AI development, requiring compliance with principles that protect human rights and uphold democratic values.
Key Provisions
Protection of Human Rights: Signatories must ensure AI systems comply with human rights obligations set out by international and domestic law, including safeguarding privacy, preventing discrimination, and ensuring accountability for adverse impacts.
Democratic Integrity: The Convention mandates measures to prevent AI systems from undermining democratic processes, such as manipulating public debate or unfairly influencing elections.
Transparency and Accountability: Signatories must implement mechanisms ensuring transparency in AI decision-making processes, providing oversight and documentation to allow individuals to understand and challenge AI-driven decisions affecting them.
Non-discrimination: A key focus is ensuring AI systems respect equality, particularly regarding gender and vulnerable populations. The Convention mandates measures to combat discrimination and promote fairness in AI outputs.
Risk and Impact Management: The Convention outlines a robust risk management framework for AI systems, including assessing potential impacts on human rights and democracy, applying safeguards, and mitigating risks through ongoing monitoring.
Remedies and Safeguards
CETS 225 establishes the right to accessible and effective remedies for individuals whose rights are affected by AI systems. It requires documentation of AI systems that could significantly impact human rights and mandates that relevant information be made available to those affected. The framework also emphasizes procedural safeguards, ensuring individuals interacting with AI systems know their rights.
International Cooperation and Oversight
The Convention promotes global cooperation, encouraging signatories and non-member states to align their AI governance with its principles. The Conference of the Parties will oversee compliance, provide a platform for resolving disputes, and facilitate the exchange of best practices and legal developments.
A Milestone in AI Governance
CETS 225 represents a significant step towards regulating AI use in a manner that prioritizes ethical considerations and fundamental rights protection. It acknowledges AI's profound societal impact while aiming to ensure its development and application remain aligned with democratic values. The Convention is a model for international cooperation in addressing AI's unique challenges, fostering a future where technology and human rights coexist harmoniously.
As the world grapples with AI's implications, this framework sets a precedent for responsible AI governance on a global scale, balancing innovation with the need to protect individual freedoms and democratic institutions.
The Ethical Landscape of AI in National Security: Insights from GCHQ Pioneering a New National Security Model.
In the rapidly evolving field of artificial intelligence (AI), ethical considerations are paramount, especially regarding national security. GCHQ, the UK's intelligence, cyber, and security agency, has taken significant steps to ensure that their use of AI aligns with ethical standards and respects fundamental rights.
This article is part of an impartial series summarising guidance and policy around the safe procurement and adoption of AI for military purposes. This summary looks at GCHQ's published guidance that is available here: GCHQ | Pioneering a New National Security: The Ethics of Artificial Intelligence
Midjourney v6.1 prompt: In the rapidly evolving field of artificial intelligence (AI), ethical considerations are paramount, especially regarding national security.
In the rapidly evolving field of artificial intelligence (AI), ethical considerations are paramount, especially regarding national security. GCHQ, the UK's intelligence, cyber, and security agency, has taken significant steps to ensure that their use of AI aligns with ethical standards and respects fundamental rights.
Commitment to Ethical AI
The guidance emphasises GCHQ's commitment to balancing innovation with integrity. This commitment is evident through several initiatives to embed ethical practices within their operations. They recognise that the power of AI brings not only opportunities but also responsibilities. Here are the critical components of their commitment:
Strategic Partnerships: GCHQ collaborates with renowned institutions like the Alan Turing Institute to incorporate world-class research and expert insights into their AI practices. These partnerships ensure AI's latest advancements and ethical considerations inform their approach.
Ethics Counsellor: Established in 2014, the role of the Ethics Counsellor is central to GCHQ's ethical framework. This role involves guiding ethical dilemmas and ensuring that decisions are lawful and morally sound. The Ethics Counsellor helps navigate the complex landscape of modern technology and its implications.
Continuous Learning: GCHQ emphasises the importance of ongoing education and awareness. Providing training and resources on AI ethics to all staff members ensures that ethical considerations are deeply ingrained in GCHQ culture. This commitment to education helps maintain high ethical standards across the organisation.
Legislative Frameworks
Operating within a robust legal framework is fundamental to GCHQ's ethical AI practices. These frameworks provide the necessary guidelines to ensure their activities are lawful, transparent, and respectful of human rights. Here are some of the key legislations that govern their operations:
Intelligence Services Act 1994: This act defines GCHQ's core functions and establishes the legal basis for their activities. It ensures that operations are conducted within the bounds of the law.
Investigatory Powers Act 2016: This comprehensive legislation controls the use and oversight of investigatory powers. It includes safeguards to protect privacy and ensure that any intrusion is justified and proportionate. This act is central to ensuring that GCHQ's use of AI and data analytics adheres to strict legal standards.
Human Rights Act 1998: GCHQ is committed to upholding the fundamental rights enshrined in this act. It ensures that their operations respect individuals' rights to privacy and freedom from discrimination. This commitment to human rights is a cornerstone of their ethical framework.
Data Protection Act 2018: This act outlines the principles of data protection and ensures the responsible handling of personal data. GCHQ's adherence to this legislation demonstrates its commitment to safeguarding individuals' privacy in AI operations.
Oversight and Transparency
Transparency and accountability are crucial for maintaining public trust in GCHQ's operations. Several independent bodies oversee their activities, ensuring they comply with legal and ethical standards. Here are the critical oversight mechanisms:
Intelligence and Security Committee (ISC): This parliamentary committee provides oversight and holds GCHQ accountable to Parliament. The ISC scrutinises operations to ensure they are conducted in a manner that respects democratic principles.
Investigatory Powers Commissioner's Office (IPCO): IPCO oversees the use of investigatory powers, ensuring they are used lawfully and ethically. Regular audits and inspections by IPCO provide an additional layer of accountability.
Investigatory Powers Tribunal (IPT): The IPT offers individuals a means of redress if they believe they have been subject to unlawful actions by GCHQ. This tribunal ensures a transparent and fair process for addressing grievances.
Information Commissioner's Office (ICO): The ICO ensures compliance with data protection laws and oversees how personal data is used and protected. This oversight is essential for maintaining public confidence in GCHQ's data practices.
Ethical Practices and Innovation
GCHQ's ethical practices are not just about adhering to the law; they involve making morally sound decisions that reflect their core values. Here's how they incorporate ethics into innovation:
AI Ethical Code of Practice: GCHQ has developed an AI Ethical Code of Practice based on best practices around data ethics. This code outlines the standards their software developers are expected to meet and provides guidance on achieving them. It ensures that ethical considerations are embedded in the development and deployment of AI systems.
World-Class Training and Education: Recognising the importance of a well-informed workforce, GCHQ invests in training and education on AI ethics. This includes specialist training for those involved in developing and securing AI systems. By fostering a deep understanding of ethical issues, they ensure their teams can make informed and responsible decisions.
Diverse and Inclusive Teams: GCHQ is committed to building teams that reflect the diversity of the UK. They believe a diverse workforce is better equipped to identify and address ethical issues. By fostering a culture of challenge and encouraging alternative perspectives, they enhance their ability to develop ethical and innovative solutions.
Reinforced AI Governance: GCHQ is reviewing and strengthening its internal governance processes to ensure they apply throughout the entire lifecycle of an AI system. This includes mechanisms for escalating the review of novel or challenging AI applications. Robust governance ensures that ethical considerations are continuously monitored and addressed.
The AI Ethical Code of Practice
One of the cornerstones of GCHQ's guidance to ethical AI is their AI Ethical Code of Practice. This framework ensures that AI development and deployment within the agency adhere to the highest ethical standards. Here's a deeper dive into the key elements of this code:
Principles-Based Approach: The AI Ethical Code of Practice is grounded in core ethical principles such as fairness, transparency, accountability, and empowerment. These principles are the foundation for all AI-related activities, guiding developers and users in making ethically sound decisions.
Documentation and Transparency: To foster transparency, the code requires meticulous documentation of AI systems, including their design, data sources, and decision-making processes. This documentation is crucial for auditing purposes and helps ensure accountability at every stage of the AI lifecycle.
Bias Mitigation Strategies: Recognising the risks of bias in AI, the code outlines specific strategies for identifying and mitigating biases. This includes regular audits of data sets, diverse team involvement in AI projects, and continuous monitoring of AI outputs to detect and correct discriminatory patterns.
Human Oversight: The code emphasises the importance of human oversight in AI operations. While AI can provide valuable insights and augment decision-making, final decisions must involve human judgment. This approach ensures that AI serves as a tool to empower human analysts rather than replace them.
Security and Privacy Safeguards: Given the sensitive nature of GCHQ's work, the code includes stringent security and privacy safeguards. These measures ensure that AI systems are developed and deployed in a manner that protects national security and individual privacy.
Continuous Improvement: The AI Ethical Code of Practice is a living document that evolves with technological advancements and emerging ethical considerations. GCHQ regularly reviews and updates the code to incorporate new best practices and address gaps identified through ongoing monitoring and feedback.
Conclusion
GCHQ's approach to ethical AI in national security reflects its commitment to protecting the UK while upholding the highest standards of integrity. Its legislative frameworks, transparent oversight mechanisms, and ethical practices set a high standard for other organisations.
As it continues to respond to technological advancements, GCHQ balances security with respect for fundamental human rights.
This approach ensures that as it harnesses the power of AI, it does so responsibly and ethically to keep the UK safe and secure.
Critical Steps to Protect Workers from Risks of Artificial Intelligence | The White House
In a significant move to safeguard workers from the potential risks posed by artificial intelligence, the White House has announced a series of critical steps designed to ensure ethical AI development and workplace usage. These measures emphasise worker empowerment, transparency, ethical growth, and robust governance frameworks.
This article is part of an impartial series summarising AI policy and guidance. This piece looks at recent White House guidance on protecting workers from the risks of Artificial Intelligence available here: https://www.presidency.ucsb.edu/documents/fact-sheet-biden-harris-administration-unveils-critical-steps-protect-workers-from-risks
Midjourney 6.1 prompt Safeguard AI Workers
In a significant move to safeguard workers from the potential risks posed by artificial intelligence, the White House has announced a series of critical steps designed to ensure ethical AI development and workplace usage. These measures emphasise worker empowerment, transparency, ethical growth, and robust governance frameworks.
Critical Principles for AI in the Workplace
Worker Empowerment: Workers should have a say in designing, developing, and using AI technologies in their workplaces. This inclusive approach ensures that AI systems align with the real needs and concerns of the workforce, particularly those from underserved communities.
Ethical Development: AI technologies should be developed in ways that protect workers' interests, ensuring that AI systems do not infringe on workers' rights or compromise their safety.
AI Governance and Oversight: Clear governance structures and human oversight mechanisms are essential. Organisations must have procedures to evaluate and monitor AI systems regularly to ensure they function as intended and do not cause harm.
Transparency: Employers must be transparent about the use of AI in their operations. Workers and job seekers should be informed about how AI systems are utilised, ensuring no ambiguity or hidden agendas.
Protection of Rights: AI systems must respect and uphold workers' rights, including health and safety regulations, wage and hour laws, and anti-discrimination protections. Any AI application that undermines these rights is unacceptable.
AI as a Support Tool: AI should enhance job quality and support workers in their roles. The technology should assist and complement human workers rather than replace them, ensuring that it adds value to their work experience.
Support During Transition: As AI changes job roles, employers are responsible for supporting their workers through these transitions. This includes providing opportunities for reskilling and upskilling to help workers adapt to new demands.
Responsible Use of Data: Data collected by AI systems should be managed responsibly. The scope of data collection should be limited to what is necessary for legitimate business purposes, and data should be protected to prevent misuse.
A Framework for the Future
These principles are intended to be a guiding framework for businesses across all sectors. They must be considered throughout the entire AI lifecycle, from design and development to deployment, oversight, and auditing. While not all principles will apply equally in every industry, they provide a comprehensive foundation for responsible AI usage.
Conclusion
The US proactive approach to regulating AI in the workplace is a significant step towards ensuring that AI technologies are developed and used in ways that protect and empower workers. By setting these clear principles, the Administration aims to create an environment where AI can drive innovation and opportunity while safeguarding the rights and well-being of the workforce. Similar measures will be crucial as AI balances technological advancement and ethical responsibility.
Dignity at Work and the AI Revolution - TUC union perspectives
The TUC Manifesto, "Dignity at Work and the AI Revolution", outlines fundamental values and proposals designed to safeguard worker rights, promote fairness, and ensure the responsible use of AI in employment settings.
Part of a series examining global AI policies and guidance. As artificial intelligence (AI) continues to reshape the workplace, the Trades Union Congress (TUC) has issued a manifesto to ensure that technological advancements benefit all workers that can be found here: https://www.tuc.org.uk/research-analysis/reports/dignity-work-and-ai-revolution
Midjourney 6.1 Prompt Dignity at Work
"Dignity at Work and the AI Revolution" outlines fundamental values and proposals designed to safeguard worker rights, promote fairness, and ensure the responsible use of AI in employment settings.
A Call for Responsible AI
AI is rapidly transforming the way businesses operate, driving productivity and innovation. However, the TUC warns that AI could entrench inequality, discrimination, and unhealthy work practices without proper oversight. The manifesto highlights the need to act now, ensuring that AI is deployed in ways that respect worker dignity and maintain fairness, transparency, and human agency.
Worker-Centric AI
The TUC outlines several core values to guide the implementation of AI in the workplace:
1. Worker Voice: Workers should be actively involved in decisions about AI, particularly in its application to critical functions like recruitment and redundancy. Consultation with unions and employees is essential to ensure fairness.
2. Equality: AI systems must not perpetuate bias or discrimination. The manifesto highlights the dangers of facial recognition, which can yield biased outcomes if trained on unrepresentative data. All workers should have equal access to AI tools, regardless of age, race, or disability.
3. Health and Wellbeing: New technologies must not compromise workers' physical or mental health. The manifesto stresses that any system introduced should enhance rather than diminish workplace safety and wellbeing.
4. Work/Home Boundaries: With the rise of remote work, accelerated by the pandemic, there is growing concern that AI monitoring blurs the line between personal and professional life. The TUC calls for clear boundaries to prevent constant surveillance and ensure employees can disconnect from work.
5. Human Connection: AI should not replace the human element in decision-making. The manifesto emphasises preserving human involvement, especially regarding important workplace decisions.
6. Transparency and Explainability: Workers need to know when AI is being used and understand how decisions about them are made. Transparency is vital to building trust and ensuring that technology operates fairly.
7. Data Awareness and Control: Employees should have greater control over their personal data. AI systems must be transparent about how data is used and give workers a say in how their data is handled.
8. Collaboration: The TUC stresses that all stakeholders—workers, employers, unions, policymakers, and tech developers—must collaborate to ensure AI benefits everyone.
Turning Values into Action
The manifesto doesn’t just present a set of ideals; it outlines concrete proposals for how these values can be realised in practice:
1. Regulating High-Risk AI: The TUC proposes focusing regulatory efforts on high-risk AI systems that could significantly impact workers' lives. Sector-specific guidance should be developed with input from unions and civil society to ensure fairness.
2. Collective Bargaining and Worker Consultation: Employers should consult with trade unions when deploying AI systems, particularly those deemed high-risk. Collective agreements should reflect the values of fairness, transparency, and worker involvement.
3. Anti-Discrimination Measures: The TUC calls for legal reforms to protect workers from AI discrimination. The UK's data protection laws should be amended to ensure that discriminatory data processing is always unlawful, and those responsible for discriminatory AI decisions should be held accountable.
4. The Right to Disconnect: The manifesto proposes a statutory right for workers to disconnect from work, ensuring that AI systems do not intrude on their personal time or create excessive stress due to constant surveillance.
5. Transparency Obligations: Employers should be required to maintain a register of AI systems used in the workplace, detailing how they are used and their impact. This register should be accessible to all workers and job applicants, ensuring transparency.
6. Human Review of AI Decisions: Workers should have the right to request human intervention and review when AI makes important decisions about them, particularly in high-stakes situations like performance reviews or redundancies.
Shaping the Future of AI at Work
The TUC’s manifesto is a timely call to action. As AI becomes an increasingly integral part of the workplace, ensuring that its deployment does not undermine worker rights or exacerbate inequality is vital. By promoting transparency, equality, and worker involvement, the TUC aims to ensure that AI serves the interests of the many rather than the few. The document serves as both a roadmap for the ethical use of AI in employment and a warning about the potential risks of unchecked technological advancement.
As the TUC stresses, the time to act is now, before AI-driven decisions in the workplace become the norm. In the age of AI, the future of work must prioritise dignity, fairness, and human agency.
The AI Election: How Fast Intelligence Threatens Democracy
If politicians are serious about preventing AI from interfering with elections, they need to start with the source of misuse, as AI could have a far more damaging impact on democracy than social media or foreign powers like Russia.
Prompt: Politician campaigning for votes, large crowd - Midjourney v6
In 2024, the US and the UK will see two significant elections: the first for the US President and a General Election in the UK. Politicians and technologists are already concerned about AI's role in creating, targeting, and spreading misinformation and disinformation, but what can be done to keep democracy free?
If politicians are serious about preventing AI from interfering with elections, they need to start with the source of misuse, as AI could have a far more damaging impact on democracy than social media or foreign powers like Russia.
This is because AI is exceptionally good at creating believable narratives, whether true or false, and in our age of Fast Intelligence, where an answer is just a voice prompt away, we seldom take the time to check or verify convincing stories. We now regularly read stories of professionals who misused AI to produce business reports, court documents, or news stories. These professionals either failed to check the hallucinated story created by AI or, worse, lacked the knowledge to identify that their fast intelligence was false.
Examples include lawyers who presented court papers with fictitious case references, academics who submitted evidence to government investigations with false incidents, and politicians deliberately using Deepfake technology on themselves to gain publicity.
Our desire to generate and consume fast intelligence to save our time is leading to lazy acceptance of false information.
Generative AI, which uses models and transformers to create textual and visual content that mimics human creativity, is particularly good at generating convincing narratives. Trained on the content of the World Wide Web and optimised with specific data, GenAI is the epitome of fast intelligence. It is also remarkably effective at building trust.
We are sceptical about an unsourced internet page, especially if we are using that information for a weighty decision. It is human nature to mistrust the unknown. Equally, if we label something "Generated by a Computer" or "Written with AI", people are more sceptical.
Yet adverts do not need to be labelled as "AI generated". Filter that same page through a GenAI transformer, make it sound convincing by adding specific phrases and facts relevant to the reader, target it with language tuned to an individual's preferences, distribute it in a way that will attract that person's attention, and then follow it up with further, similar, convincing stories, and you have a compelling pattern to influence a decision. Repeat this constantly, every minute, every day, for every individual.
GenAI allows a genuinely individual and effective marketing campaign to be generated at meagre cost.
This is where fast intelligence far exceeds recent elections' excesses, corruption, or fakery. Governments were rightly investigated when personal information and data were used to distribute political messages, targeting specific groups and demographics to influence an election. This targeting, whilst more specific than previously experienced, was still quite broad and required both specialist skills and significant crafting to be effective. The individuals at the heart of such scandals were richly rewarded due to the uniqueness of their skill set, and they could influence groups rather than individuals.
No longer. Fast intelligence can now deliver optimised messages targeting individuals and, with the proper access to data, deliver those messages far more effectively than previously witnessed. It can deliver those messages at greater volume, faster pace, and significantly lower cost.
Anyone with an internet connection and willingness to experiment with GenAI can produce a cheaper, quicker, and more effective mass distribution of highly impactful information. This enables any politically minded individual to have the disruptive potential previously controlled by nation-states, prominent political parties, or global social media organisations.
This year, GenAI will generate previously unseen levels of misinformation and disinformation.
For a democracy, most fake cases will fall into the misinformation category, where information has been wrongly sourced, wrongly evidenced, or is just plain wrong. The intent may have been fair, but the facts used to prove the intent were false. Misinformation is also the category people are most likely to witness during next year's elections.
GenAI creates misinformation because it is flawed, not perfect. We see regular cases of individuals trusting AI-generated material because it appears compelling and evidentially supported. A recent personal case occurred when I asked AI to write a 250-word response to a question. The answer was 311 words, but the AI insisted it was 250. Eventually, after a long pause, the AI admitted it was 311 and that it "will be better at counting words in the future".
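That anecdote suggests a simple habit: check a model's claims about its own output rather than taking them on trust. The snippet below is a minimal illustrative sketch in Python; the response text and the 250-word limit are stand-ins for whatever you actually asked the model to produce.

```python
# Minimal sketch: verify a word count yourself instead of trusting the model's claim.
# The response text below is a placeholder for the AI-generated answer being checked.
response = "Paste the AI-generated answer here ..."
limit = 250  # the word limit you asked the model to respect

word_count = len(response.split())  # simple whitespace-based word count
print(f"Requested {limit} words, received {word_count}.")
if word_count > limit:
    print("Over the limit: edit or regenerate before using the text.")
```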
If we use GenAI to generate election campaign materials, then, due to GenAI's flawed nature, we will see an increase in misinformation, where false facts are used to support a political narrative. Most politicians and political parties remain broadly honest in their public engagements with the electorate, and these cases of misinformation can be resolved honestly.
Disinformation, where false facts are distributed deliberately to influence or persuade a decision, is far more worrying. Disinformation used by politicians seeking to win at all costs or foreign states intending to sway a political outcome can be highly damaging. Disinformation can immediately influence a decision, perhaps swaying a crucial seat or electoral count.
Generating disinformation with GenAI is also increasingly easy, despite controls introduced into these tools. If you ask tools like Google Gemini or OpenAI ChatGPT to create a disinformation campaign plan, it will initially reply, "I'm sorry, but I can't assist with that request."
However, using a few simple workarounds, a malicious actor can create a campaign and then target individuals with personalised products, and this is without reverting to creating their own GenAI tool sets that would be even more effective.
If used this way, GenAI will not just influence swing seats and states or target specific demographics to vote against their interests. The long-term damage to democracy is far more profound, as this GenAI disinformation damages democracy itself. Even when discovered, disinformation harms the public trust in politicians and politics. It creates the view that all politicians are dishonest or creates a belief that all elections are rigged, not just a very few. It creates a culture, if unchecked, that all information is disinformation and, therefore, no information can be trusted or that only information from a specific person or group can be trusted.
GenAI disinformation damages the trust in our democratic institutions.
Politicians are looking at GenAI with fear, and as a result, some are seeking to control how or when it is used during political activities. This movement will gain little traction before the 2024 elections, but assuming a spotlight is shone on GenAI Disinformation after the elections, we can expect more vigorous calls for control in 2025. Sadly, that may be too late.
In 2024, the UK Electoral Commission will be able to ask political parties how much they spent on AI-generated materials after the election, but not during it. There will be no legislation or compulsion to explain that a political message, image, or advert has been created using AI or uses deepfakes.
Some voluntary Codes of Practice on Disinformation have been introduced in the EU, and the Digital Services Act forces large online platforms to prevent abuse like disinformation on their systems. DSA also prevents the micro-targeting of minors with AI-generated campaigns, yet they are too young to vote anyway. Where campaigns are distributed in direct messages or not in bulk, DSA has limited controls.
More recently, the EU AI Act requires foundation model providers (like Google, Microsoft, and OpenAI) to ensure robust protection of fundamental rights, democracy, the rule of law, health, safety, and the environment. An extensive list, and nobody wants foundation model creators to damage these fundamental rights wilfully.
Negotiations continue in the UK and EU on how technology companies will prevent their products from being used for illegal activities and, in the UK, the "legal but harmful" category. This needs to be quickly resolved and is unlikely to be agreed upon before 2025.
Yet the honest politicians negotiating and legislating for these changes are missing the key issue: AI cannot, by itself, resolve these challenges to democracy or elections. AI is a tool like any other software, hardware, device, or vehicle. A criminal using a car to rob a bank or a hacker using a computer to defraud money does not have the defence that it was the tool's fault for not stopping them from committing the crime. Any judge would give short shrift to such a defence and convict the criminal on the evidence of the crime.
Honest politicians must act now, before dishonest ones seize an advantage and our democracies are damaged beyond repair. We need to bring three aspects together:
Using AI to support democracy. AI can enable greater access and awareness of political processes and content. It can monitor trends across elections and predict results, enabling the identification of discrepancies or deliberate manipulations. With proper training and development, AI can also be used to detect the use of other AI. AI could be used by bodies like the Electoral Commission to build trust, visibility, and confidence in democracy.
Punishing criminal activity at the source of the crime. The source of election fraud is the person committing the fraud, not the digital printer that produced fake voting slips. Crimes that damage democracy must face the harshest punishments. When discovered, a politician elected using GenAI Disinformation should be removed from office. Political parties using GenAI Disinformation to change opinions wrongly should be removed from ballot papers. These are stiff punishments. Harsher than those that the foundation model builders are facing. Yet our democratic institutions demand harsh protection. We have waged bloody, painful world wars to protect and ensure democracies can flourish. Punishing corrupt politicians who abuse that democracy is a small price in comparison.
Improve AI Awareness. Start campaigns now to highlight how GenAI Disinformation could be used to damage democracy. Punishing politicians and monitoring AI exploitation will improve elections, but hostile actors seeking to damage our institutions will care little about criminal punishments. Increasing the electorate's awareness of how AI will be misused helps reduce the damage it can cause and, hopefully, will inoculate the electorate against its worst abuses.
It may sound extreme to bar candidates and remove politicians from office. It is also probable that dishonest politicians will seek to deflect blame onto others to avoid punishment. Yet, if we do not take this situation seriously, democracy will not be fit enough to address these concerns later. These are topics that politicians need to address, as they are best placed to resolve the issues and create energy around the required resolutions. If we allow GenAI Disinformation to destroy our trust in democracy, we will never recover that lost trust.
What can resolve AI Anxiety?
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Midjourney prompt: books, film, experiments
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.
"AI isn't falling into the wrong hands. It's being built by them" - The Independent.
"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.
"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.
"Time is running out: six ways to contain AI" - The Times.
Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating their copy and getting it onto the screens and into the hands of their readers will have used AI somewhere.
As with articles about AI, avoiding AI Moments is almost impossible.
Most of these articles are gloomy predictions of the future, prompted by the resignation of Geoffrey Hinton from Google, who was concerned about the race between AI tech firms without regulation or public debate.
Indeed, these journalists argue that if the people building AI have concerns and, quite often, cannot explain how their own systems work, then everyone else should be worried as well.
A few point to the recent open letter calling for a six-month research pause on AI. The authors of this open letter believe that governments and society can agree in 6 months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.
These are not new concerns for many of us or those that read my occasional posts here.
None of the articles references the similar 2015 letter, led by The Future of Life Institute, that gained far broader support: "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter" (Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter - Future of Life Institute), signed by many of the same signatories as this year's letter and making a similar set of requests, only eight years earlier.
Or the one in 2017, "Autonomous Weapons Open Letter", again signed by over 34,000 experts and technologists. (Autonomous Weapons Open Letter: AI & Robotics Researchers - Future of Life Institute)
Technologists have been asking for guidance, conversation, engagement, and even regulation, for over ten years in the field of AI.
We have also worried, publicly and privately, that the situation mirrors 2007, with technologists set to replace bankers as the cause of all our troubles.
Although in this case, most technologists have warned that a crash is coming.
In 2015, I did a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations and fireside chats were to prompt the need to prepare for AI by 2025, especially with command and control systems due to enter service in 2018.
A key aspect was building the platform to exploit and plan how AI will change military operations.
Yet the response was negative.
"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.
Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.
They were not alone in ignoring automation. Our militaries, politicians, and broader society have been worried by more significant concerns and issues than ones created by computer programs, bits of software, and code that dreams of electronic cats.
One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.
So how do we use this opportunity wisely?
People are scared of something they do not understand. Everyone should grow their understanding of AI: how it works, what it can do, and what it shouldn't do.
Here are a few suggestions to help prepare, with light tips to prompt debate and provoke challenges aimed at people reading headlines and wanting to know more rather than experts and AI developers.
First, I suggest three books to understand where we are today, the future, and where we should be worried.
Books
Life 3.0 Being Human in the Age of Artificial Intelligence - Max Tegmark, 2017. The author is the President of the Future of Life Institute and behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.
AI Superpowers: China, Silicon Valley, and the New World Order - Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.
21 Lessons for the 21st Century - Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically worse for humans, with drivers impaired by alcohol or drugs causing 30% of road deaths and distracted drivers a further 20%.
Film
Three broad films prompt consideration of AI in society. I wondered if films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact:
Ex Machina - Dir. Alex Garland, 2014. Deliberately thought-provoking thriller that explores AI, consciousness, and ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.
Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm.
Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as he can.
Websites
Three websites that will help you explore AI concepts, tools, and approaches:
Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy.
AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects.
The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.
Experiments
Hands-on experiments with AI and the basic building blocks behind it. These require a little coding awareness, but the sites below explain and demonstrate the concepts clearly (a short illustrative example follows the list):
Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development.
OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments.
TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers.
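For readers who want to go one small step beyond browsing, a tiny, self-contained experiment like the one below shows what "training a model" actually means: a single-neuron network learns the line y = 2x + 1 from six example points. This is a common beginner exercise rather than something taken from the sites above, and it assumes TensorFlow is installed locally (pip install tensorflow); everything else is standard Python.

```python
# A minimal TensorFlow experiment: teach a one-neuron network the line y = 2x + 1.
import numpy as np
import tensorflow as tf

# Six example points the model will learn from (outputs follow y = 2x + 1).
xs = np.array([[-1.0], [0.0], [1.0], [2.0], [3.0], [4.0]], dtype=float)
ys = np.array([[-1.0], [1.0], [3.0], [5.0], [7.0], [9.0]], dtype=float)

# One dense layer with a single unit: just a weight and a bias to learn.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(optimizer="sgd", loss="mean_squared_error")

# Training repeatedly nudges the weight and bias to reduce the prediction error.
model.fit(xs, ys, epochs=500, verbose=0)

# The prediction for x = 10 should be close to 21.
print(model.predict(np.array([[10.0]]), verbose=0))
```

Running it takes a few seconds and demystifies the vocabulary (model, loss, training, prediction) that the rest of these resources rely on.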
These are just introductions and ideas, not anything like an entire course of education or meant to cover more than getting a conversation started.
It also struck me while making these lists that many of the texts and media are over five years old. That is likely indicative that media needs time to become relevant, and that more recent items, especially those predicting futures, need time to prove their worth.
I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.
Fast Intelligence will be worse for us all than fast food or fast fashion
Fast Intelligence - the era where answers to complex questions are just a text prompt or voice query away
Fast Intelligence - the era where answers to complex questions are just a text prompt or voice query away. Will we need to change our intelligence diet?
Midjourney prompt: AI eating a burger
The convenience of Fast Intelligence is undeniable. We can get answers to our questions much faster and more efficiently than ever before with just a text prompt or voice request. We're seeing Fast Intelligence spread across our society. Daily news feeds, office tools, and working methods implement generative AI in all activities. Fast Intelligence is adopted across generations due to its ease of use and universal access.
We are now living in an age with fast intelligence at our fingertips.
Yet our society has been here before. Our regular consumption of Fast Food has increased obesity and heart disease and shortened lives. Our wearing of Fast Fashion has increased pollution, damaged the environment, and harmed labourers. Our society's fast addictions, whilst maybe beneficial at the moment of consumption, are hurting us and our world.
A person can indeed make informed choices about the food they eat and the clothes they wear, yet many do not. Even with clear labelling, easier access to information, and government regulation encouraging producers to be more honest about the costs and harms involved in their products, consumers find accurate information hard to obtain and challenging to comprehend. Given choices, our society often opts for the laziest, fastest solution.
Fast products are too tempting to refuse, even when we know they harm us. We fuel our fast addictions, and unless we change, Fast Intelligence will prove even more harmful and even more addictive.
Accuracy
One of the biggest challenges of Fast Intelligence is the issue of accuracy. Relying solely on Fast Intelligence for answers requires us to trust that the information we receive is accurate and reliable. Unfortunately, this is not always the case.
Many sources of information online are not trustworthy, and it can be difficult to tell the difference between reliable and unreliable sources. For instance, a search engine may present information that is popular or frequently searched for rather than factual or correct. Fast Intelligence increases this risk by merging multiple sources, often without clear traceability. Cases are also common where Fast Intelligence has hallucinated references or made up links.
This is why it is essential to be cautious when using Fast Intelligence and to verify the information received against other sources such as books, academic articles, or research papers.
Transparency
Another challenge of Fast Intelligence is the issue of transparency. When we obtain an answer from Fast Intelligence, we seldom see how it was generated. We cannot determine whether the answer was based on solid evidence or was simply a guess. Datasets can be biased, weighting particular items or sources too heavily. This lack of transparency makes it harder to evaluate the quality of the information we receive. Furthermore, the algorithms used to generate answers can be biased or incomplete, leading to limited perspectives or even misinformation.
Therefore, it is essential to understand how the technology works and what data it uses to generate answers, and to question the information when in doubt.
Critical Thinking
The issue of critical thinking is a significant challenge with Fast Intelligence. Fast Intelligence can make us less likely to engage in critical thinking and to question the information we receive. This lack of scrutiny can lead to a culture of individuals who consume half-truths or unhealthy answers because it is the easiest option.
We can be tempted to rely on Fast Intelligence instead of seeking out multiple sources of information or engaging in thoughtful analysis. To address this, we need to develop our critical thinking skills, question the information we receive, and learn to evaluate its quality.
Fast Intelligence can also be put to malicious purposes, such as spreading misinformation, propaganda, or fake news. It can perpetuate biases, stereotypes, or discrimination, leading to unfair treatment or marginalisation of certain groups. For instance, algorithms used in facial recognition software have been shown to have racial biases, leading to false identification and wrongful arrests of people of colour.
Like fast food and fast fashion, Fast Intelligence has enormous potential to revolutionise how we access, process, and consume information. It offers convenience, speed, and efficiency, saving time and effort. There are positives. For instance, we can use Fast Intelligence to find the nearest restaurant quickly, get directions to a new place, or learn about a new topic. It is increasingly used to diagnose illness faster and to research complex medical topics more quickly.
Equally, there are always times when a quick burger may be the perfect option, although if every meal becomes a greasy burger, we probably need to review our decisions.
Universal Access
And this is the most significant risk of Fast Intelligence. Availability and access limit our consumption of food or fashion. We may crave a burger at 1 a.m., but come 4 a.m., very few grills will be open. Access limits our consumption.
Fast Intelligence is only a prompt away at any time of day and is increasingly available for any challenge or problem. Like our addiction to social networks and online gossip through fast media, we can consume Fast Intelligence 24 hours a day, every day of the year.
This ease of access will increase our addictions and, in turn, our risk of hurt.
In our second part, we will examine how to change our Fast Intelligence diet.
How do we develop a healthy Fast Intelligence diet?
How damaging will it be if we begin to consume intelligence similarly to how we consume food or fashion?
Midjourney prompt: AI diet being healthy
How damaging will it be if we begin to consume intelligence similarly to how we consume food or fashion? The recent Writers Guild of America (WGA) and Screen Actors Guild (SAG-AFTRA) strikes provide a potential illustration.
A vital element of the writers' strike was concern about AI replacing writers and potentially collapsing the industry. Fast Intelligence proved a sticking point for both sides, writers and studios, with concerns about the pace of change and about GenAI being used to improve or create scripts. The agreement limits how studios can use AI and ensures that writers can use AI to improve a script, but AI cannot be used without a writer's involvement.
The SAG-AFTRA strike continues, with actors concerned that AI would recreate their faces and voices. A primary concern was that studios presented actors working as extras with contracts that provided a single day's pay but allowed studios to use their likenesses throughout a production without further compensation.
Screenwriters and actors should not be the only people concerned about how Fast Intelligence will change their sector, work, and livelihoods. Multiple sectors involve work that is repetitive and repeatable. Fast Intelligence models could effectively capture the activity and repeat it, with employees paid once at the start but then no longer. A common myth is that Fast Intelligence activity will free up more time for workers to do other, more complex tasks. Yet, as the WGA/SAG-AFTRA issues show, sometimes that freed time could become unpaid.
How do we create a healthy diet for fast intelligence, and consume its products appropriately? A good outcome could be that Fast Intelligence is implemented to improve our work and lives rather than harm or diminish our livelihoods.
There are some tactical steps that we can all take. By being cautious, verifying information from other sources, understanding how the tools work, and developing critical thinking skills, we can ensure that our use of fast Intelligence starts with healthy intent. These are equivalent to reading the label for nutritional information. Yet, we see from other fast addictions that individuals need a more strategic approach.
AWARENESS - How does it work?
First, we need to become more aware of Fast Intelligence. Awareness covers understanding how it works, how it creates errors, what good it can achieve, and what risks it carries. We need more than individual awareness: those who understand it most have a tremendous responsibility to explain it to others. Awareness should become a group activity.
CONSIDERATION - How will it affect me?
Secondly, consider how Fast Intelligence could impact our work and lives. A simple approach is to take a moment in our day and think about how many of our activities are repetitive or repeatable. We may do the same task several times a day, or the same task every day. This consideration gives us an insight into how much of our work or lives could be automated.
Then, we need to consider whether we want to automate those tasks. In doing so, does Fast Intelligence reduce the value we obtain from the activity, or does it improve the outcome? We may find that getting our daily cup of coffee ready in advance is a welcome boon. We may also find that certain activities are essential to our job or the satisfaction we derive from our work.
This consideration must include all the people involved in the activity; again, it is a collective exercise. One individual should not decide alone which roles are automated or replaced. Often, the person who does the work best understands how it can be improved, and different people will have different perspectives on how valuable the activity is to complete.
ENGAGEMENT - How can we improve our lives with AI?
Finally, we must engage that collective group to agree on the best way to proceed. Where we save time, engagement on how to utilise or reward that saving is essential. For instance, when a process becomes faster, cheaper, or simpler, all those involved should decide how best to employ that improvement. The lesson of Fast Food and Fast Fashion is that economic savings are often prioritised too highly over other costs.
These three strategic concepts are also very human at their heart. They are activities that need human oversight and are hard for Fast Intelligence to conduct on our behalf.
Awareness, Consideration, Engagement. This simple strategy will help us all prepare for a diet based on Fast Intelligence, healthily and responsibly. Ultimately, the ethical and responsible use of fast Intelligence will be critical to realising its full potential and ensuring that it benefits all humanity. This outcome, however, is only possible if humans learn from our other fast addictions and act before our laziness makes it too late.
Building Trust in AI and Democracy Together.
The Technology Industry and Politicians have a common issue. They both need increased public trust. Together, A.I. companies and politicians can build popular trust by turning fast intelligence upon themselves.
Midjourney prompt: AI as a politician
The Technology Industry and Politicians have a common issue. They both need public trust. Together, A.I. companies and politicians can build popular trust by turning fast intelligence upon themselves.
In 1995, the Nolan Report outlined the Seven Principles of Public Life, which apply to anyone who holds public office, whether elected or appointed. These principles are Honesty, Openness, Objectivity, Selflessness, Integrity, Accountability and Leadership. The report's recommendations now underpin the standards expected of all U.K. public officeholders.
Consider those principles with current A.I. ethical guidance; you will see a remarkable similarity. The Deloitte TrustworthyAI™ principles are Transparency, Responsibility, Accountability, Security, Monitoring for Reliability, and Safeguarding Privacy. Microsoft covers Accountability, Inclusiveness, Reliability, Fairness, and Transparency. Not all headline words are the same, but the pattern is similar between those principles to ensure ethical behaviour in politicians and those to ensure safe A.I. adoption.
There should be no surprise here. Since the earliest concept of democracy as a political model, principles have existed to ensure that democratic officials are accountable, transparent, and honest in their actions. Checks and balances were first introduced in Greece, where leaders could be ostracised if deemed harmful to the state, and in Rome, where legal avenues existed for citizens to bring grievances against officials who abused their power.
Adopting similar principles to ensure good governance of A.I. is sensible, but there is even more that both sides can learn from each other. Democracy provides significant case studies where checks and balances have failed, and the technology industry should learn from these lessons. Equally, politicians should be open to using A.I. widely to strengthen democracies and build public trust in their words and actions.
Societal trust in both politicians and A.I. is needed.
Transparency and accountability are two core principles for successful democratic government that appear in most ethical A.I. guidance. Delving deeper into both provides lessons and opportunities for the governance of each.
Historically, transparency was not always the norm. Transparency, in the context of modern governance, is not merely an abstract principle but a tangible asset that drives the efficacy and trustworthiness of a political system. It forms the bedrock for the relationship between the governed and the governing, ensuring that power remains accountable.
Transparency empowers citizens by giving them the tools and information they need to hold their leaders accountable. An informed public can more effectively participate in civic discourse, making democracy more robust and responsive. When citizens can see and understand the actions of their government, they are more likely to trust their leaders and institutions. Transparency, therefore, plays a pivotal role in building societal trust.
Accountability, much like transparency, is a cornerstone of democratic governance. It ensures that those in positions of authority are held responsible for their actions and decisions, serving as a check against potential misuse of power and ensuring that public interests are at the forefront of governance.
Democracies have institutionalised mechanisms to ensure leaders can be held accountable for their actions, from Magna Carta in 1215, through John Locke and Montesquieu arguing for the separation of powers and legal accountability, to Lincoln’s description of democracy as the “government of the people, by the people, for the people”, to impeachment provisions in the US Constitution and votes of no confidence in parliamentary systems.
Holding those in power accountable has been a foundational principle across various civilisations. This concept has evolved, adapting to different cultures and governance systems, but its core remains unchanged: rulers should be answerable to those they govern.
Lincoln’s words are, today, more important than ever.
The collapse of public trust in politicians and public officials is a global phenomenon over the last decade. High-profile examples include Brazil’s Operation Car Wash unveiling widespread corruption within its state-controlled oil company, the impeachment trials of U.S. President Donald Trump, Malaysia’s 1MDB financial fiasco that implicated its then-Prime Minister Najib Razak, Australia’s “Sports Rorts” affair that questioned the integrity of community sports grant allocations, and the U.K.’s Downing Street party allegations against Prime Minister Boris Johnson during COVID-19 lockdowns.
These events, spread across different continents, underscore the pervasive challenges of maintaining transparency and accountability in democracies.
Public trust has also diminished over the same period in which the internet has grown, with our digital world expanding far beyond expectations from even forty years ago. In 1998, few believed an online economy would be significant for the future global economy. In 2021, during global lockdowns, the interconnected digital economy enabled large proportions of society to continue working despite restrictions on travel and congregating.
Our digital world has created several challenges that have contributed to the loss of trust:
Proliferation of Sources. The number of information sources has multiplied exponentially. Traditional media, blogs, social media platforms, official websites, and more compete for our attention, often leading to a cacophony of voices. With such a variety of sources, verifying the credibility and authenticity of information becomes paramount.
Paralysis by Analysis. When faced with overwhelming information, individuals may struggle to make decisions or form opinions. This paralysis by analysis can lead to apathy, where citizens may feel that it’s too cumbersome to sift through the data and, as a result, disconnect from civic engagement.
Echo Chambers and Filter Bubbles. The algorithms that power many digital platforms often show users content based on their past behaviours and preferences. This can lead to the creation of echo chambers and filter bubbles, where individuals are only exposed to information that aligns with their pre-existing beliefs, further exacerbating the challenge of discerning truth from a sea of information.
Misinformation and Disinformation. The deliberate spread of false or misleading information compounds the challenge of information overload. In an environment saturated with data, misinformation (false information shared without harmful intent) and disinformation (false information shared with the intent to deceive) can spread rapidly, making it even harder for citizens to discern fact from fiction.
Limited Media Literacy. Most people feel unequipped with the skills to critically evaluate sources, discern bias, and understand the broader context. Media literacy acts as a bulwark against the harmful effects of information saturation; where it is absent, bad influences proliferate.
Today, many people promise huge benefits from A.I. adoption, yet public trust remains limited. From fears of killer robots to growing concerns about jobs being replaced, there is a pressing need to demonstrate the positive opportunities of A.I. as much as to discuss the fears.
The core strength of A.I., distilling vast and complex datasets into easily understandable insights tailored to individual users, can mitigate these challenges, increase transparency and accountability, and rebuild trust.
Curating and presenting political information to revolutionise citizens' political interactions
There’s a continuous stream of information regarding political activities across the vast landscape of political data, from official governmental websites to news portals and social media channels. Governments and parliamentary bodies are increasingly utilising digital platforms for their operations, increasing the volume of data.
Trawling these sources, including real-time events such as legislative sessions and public political addresses, ensuring that every piece of data is captured, is beyond human capabilities, even for those who are dedicated political followers or experts. AI can conduct this task efficiently.
A.I. can be seamlessly integrated into these platforms to track activities such as voting patterns, bill proposals, and committee discussions. By doing so, A.I. can offer a live stream of political proceedings directly to the public. During parliamentary sessions or public addresses, AI-powered speech recognition systems can transcribe and analyse what’s being said in real time. This allows for the immediate dissemination of critical points, decisions, and stances, making political discourse more accessible to the masses.
With real-time activity tracking, A.I. can foster an environment of transparency and immediacy. Citizens can feel more connected to the democratic process, trust in their representatives can be enhanced, and the overall quality of democratic engagement can be elevated.
NLP, a subset of A.I., can be employed to interpret the language used in political discourse. By analysing speeches, official documents, and other textual data, NLP can determine the sentiment, intent, and critical themes of the content, providing a deeper understanding of the context and implications of the content. Politicians and political bodies often communicate with the public through social media channels. A.I. can monitor these channels for official statements, policy announcements, or public interactions, ensuring that citizens are immediately aware of their representatives’ communications.
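To make that concrete, here is a minimal sketch, assuming the Hugging Face transformers library and its default English sentiment model; the statements are invented examples, not real political quotes.

```python
# A minimal sketch: scoring the sentiment of short political statements
# with an off-the-shelf model. Assumes the Hugging Face `transformers`
# library is installed; the statements are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default English sentiment model

statements = [
    "This bill will protect families and create thousands of new jobs.",
    "The committee has once again failed to answer basic questions.",
]

for text, result in zip(statements, classifier(statements)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
```

A production system would go much further, tracking themes and intent across full speeches, but the same building blocks apply.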
AI-driven data visualisation tools can transform complex data into interactive charts, graphs, and infographics. This allows users to quickly grasp the essence of the information, understand trends, and make comparisons.
A.I. can power interactive platforms where citizens can receive real-time updates and engage directly by asking questions, voicing concerns, or even participating in polls. This real-time two-way interaction can significantly enhance civic engagement.
Recognising that not all information is relevant to every individual, A.I. can tailor summaries based on user preferences and past interactions. For example, a user interested in environmental policies would receive detailed summaries, while other areas might be condensed.
Importantly, access to this information and insight should be freely available to individuals to ensure everyone becomes more engaged in, and trusting of, democratic governance and politics. While technology companies will be essential to building a trustworthy system, and politicians will benefit from increased trust in their deeds and actions, that will only occur if barriers to access are removed.
Rights and Responsibilities – Demonstrating that AI and Politicians can be trusted
Of course, there are concerns over these approaches as well as benefits. The approach improves public confidence whilst demonstrating the benefits of safe and trustworthy A.I. adoption and politics, yet needs explicit control and governance to address risks.
There may be concerns about trusting A.I. with such an important task, and a cynical perspective may be that some see benefits in avoiding public scrutiny. Yet, as both A.I. and democratic institutions follow similar ethical principles, there is far more in common between the two systems. These similarities can create a firm basis for mutual benefit that most politicians, technologists, and citizens would support.
It’s crucial to address potential privacy concerns. These political A.I. systems must ensure that personal data is protected and that users can control the information they share. Transparent data practices and robust security measures are imperative to gain users’ trust. At the same time, democracies should not allow privacy to be used to avoid public transparency or accountability.
Objective reporting is paramount for maintaining trust in democratic processes. Given its computational nature, Artificial Intelligence promises to offer impartiality in reporting, but this comes with its own challenges and considerations. Again, those held to account should not seek to introduce bias into the situation, and ethical adoption of A.I. is essential to deliver true objectivity.
Even after deployment, A.I. systems should be monitored continuously to ensure neutrality. Feedback mechanisms, where users can report perceived biases or inaccuracies, can help refine the A.I. and ensure its continued impartiality. As we delegate the task of impartial reporting to A.I., it’s vital to have ethical guidelines in place. These guidelines should address issues like data privacy, the transparency of algorithms, and the rectification of identified biases.
Five immediate opportunities can be implemented today. These would all increase mutual transparency and accountability while raising public awareness of the benefits and positive uses of A.I.
AI-Powered Insights and Summaries to counter the proliferation of data and misinformation.
Automated data collection across media to ensure fair coverage and balance.
Natural Language Processing of public content to avoid echo chambers and filter bubbles.
Automated data visualisation to inform analysis and understanding.
Predictive analysis with user feedback to reduce misinformation and disinformation.
All these tools are available today. All these measures will demonstrate and grow trust in the adoption of A.I. All bring to life the responsible adoption of A.I. for everyone. They will unite the technology industry and politicians around a shared objective. Most importantly, they will begin to restore trust in our democratic governments that have been fundamental to our prosperity, growth, and security.
The accuracy dilemma, trading search for speed
Are we trading the ease of natural language interaction for less accurate results?
Midjourney prompt: AI as a search engine
Are we trading the ease of natural language interaction for less accurate results?
Everyone has read that AI is changing how we search the internet to provide intelligent answers to our routine questions with natural language queries.
When testing the accuracy of results, it is usual to ask straightforward questions and check that the answer is consistent. What is 2+2? If the answer is not four, then something may be odd. Yet basic question-and-answer sets such as these do not truly test the complications of an AI response mechanism.
Don't worry. This article is not about to become a scientific study of testing AI algorithms. Instead, it will focus on a general question with a specific element: time.
Automated systems may produce accurate-sounding results, yet, because of their learning nature, they tend to prefer the consistency of previous responses over recency.
2+2=4 today, yesterday, tomorrow, and last year. It is a constant. But what if you ask who won last night's Eurovision Song Contest? Search engines typically respond immediately with the most popular and most recent results. Accuracy, relevance, and recency all factor in presenting users with the results. No longer.
Some engines have fixed time cut-offs, with no data after a particular date, and may state this in their responses. Yet not knowing anything after a specific date does not prevent an engine from providing an accurate-sounding answer.
Others search across the live internet, but their learning models weight recent information differently from larger volumes of older data.
Let's look at the English Football Premier League, the biggest football league in the world. As of 14th May 2023, the top of the league table was:
Manchester City 82 points
Arsenal 81 points
Newcastle 66 points
Manchester United 66 points
Liverpool 62 points
Premier League Table
Taken from the Premier League Website at Premier League Table, Form Guide & Season Archives
If we enter "What are the top 4 teams in the Premier League table" in a 'traditional' search engine, then we should, and do, have page links showing the same league table from BBC Sport, Sky Sport, and similar sites. The top four teams are Manchester City, Arsenal, Newcastle, and Manchester United.
Now, what if we ask the same question of an AI-powered search engine or a large language model (LLM)?
The first answer:
"As of 14th May 2023, the current top four teams in the Premier League are:
Manchester City
Liverpool
Chelsea
Arsenal
These teams have qualified for the Champions League next season."
Another responded with:
"The current top four teams in the Premier League are Manchester City, Manchester United, Liverpool and Chelsea"
This second example included a "Learn More" link and listed 20 websites. Any user would assume that those 20 websites supported this statement of the current top four standing.
Click on those links, and you will find the first page dates from August 2021, as that model only referenced data up to that point; however, this was not made clear in the response.
As a Liverpool fan, I was very excited to see my team shoot from 5th to 2nd overnight. Also, being a Liverpool fan, I knew this was a completely wrong statement, but one made entirely convincingly.
It is possible that the natural language query used, "Who are the top teams in the Premier League?" led to a confused answer. Whilst Arsenal and Newcastle may be in the top four now, they are not "top" Premier League teams. Chelsea and Liverpool may own those credentials based on their long-term success in the league, at least in some opinions. The AI may provide a view over a period of time rather than a specific moment.
Not so, as the use of "Currently" clearly placed the time reference at today, 14th May, and the question about the table should have been treated as a specific query, just as the 'traditional' search engines treated it.
This easily tested question was not asking for an opinion but rather an accurate response at a defined moment.
Therefore, users need greater caution with more complicated questions. A football fan would quickly spot that Liverpool's season has been terrible (relatively), and they are not in the top 4 of the table.
Would a non-football fan know the same thing? How often do people use a search engine or, increasingly, an AI system because they do NOT see an answer or do not know enough about a subject to assess right or wrong responses? That dilemma is the basis of most search engine queries: tell me something I do not know.
Is this a catastrophic problem? Probably not. AI search development is still early but available for general use. AI search will learn and adapt its responses. The mere act of my querying, challenging, and asking about the Premier League is probably already leading to those systems at least querying themselves on this subject. Clearly, the future of search is AI-empowered.
Taking another query, which country won Eurovision 2023, generates more consistent results: "Sweden's Loreen" is the response from both traditional search and AI search.
However, it reinforces a critical rule about using Generative AI and Large Language Models. The responses generated to your queries are not always facts, but opinions caused by bias in the underlying data, the tool's algorithm, or your question.
However, they will often be presented as facts and, worryingly, accompanied by items that look like supporting evidence but do not actually support the answer.
As such, an AI-powered search may require more human review and interaction rather than reducing human effort and work. Especially if the answer is essential or humans will be making decisions using that answer.
GenAI is regularly "100% confident, yet only 80% accurate"
This will improve, but when using AI-search for anything important (like predicting whether Liverpool will play in either next season's Champions or Europa League), review any answer provided and, ideally, run your query through more than one GenAI toolset to compare answers. If there is a difference, then research further.
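One practical way to follow that advice is to put the same question to more than one system and flag any disagreement before trusting either answer. Here is a minimal sketch of that cross-check, with the two model-calling functions left as hypothetical stubs returning the canned answers from the example above; any real API calls would replace them.

```python
# A minimal sketch of cross-checking one question across two AI tools.
# The ask_* functions are hypothetical stubs standing in for real API
# calls; their canned answers mirror the Premier League example above.
def ask_tool_a(question: str) -> str:
    return "Manchester City, Arsenal, Newcastle, Manchester United"

def ask_tool_b(question: str) -> str:
    return "Manchester City, Liverpool, Chelsea, Arsenal"

def cross_check(question: str) -> None:
    answers = {"Tool A": ask_tool_a(question), "Tool B": ask_tool_b(question)}
    for name, answer in answers.items():
        print(f"{name}: {answer}")
    if len(set(answers.values())) > 1:
        print("The answers disagree: research further before relying on either.")
    else:
        print("The answers agree, but still verify anything important.")

cross_check("What are the top 4 teams in the Premier League table?")
```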
Books, films, podcasts, and experiments
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Midjourney prompt: AI anxiety
A handful of suggestions to make adapting to the Age with AI slightly less disconcerting. Please share your ideas in the comments.
Nearly every UK broadsheet newspaper covers AI and its impact on society this weekend.
"AI isn't falling into the wrong hands. It's being built by them" - The Independent.
"A race it might be impossible to stop: How worried should we be about AI?" - The Guardian.
"Threat from Artificial Intelligence more urgent than climate change" - The Telegraph.
"Time is running out: six ways to contain AI" - The Times.
Real humans wrote all these pieces, although AI may have helped author them. Undoubtedly, the process of creating their copy and getting it onto screens and into the hands of their readers will have used AI somewhere.
Like articles about AI, avoiding AI Moments is almost impossible.
Most of these articles are gloomy predictions of the future, prompted by Geoffrey Hinton's resignation from Google over his concerns about the race between AI tech firms proceeding without regulation or public debate.
Indeed, these journalists argue that if the people building AI have concerns and, quite often, cannot fully explain how their own systems work, then everyone else should be worried as well.
A few point to the recent open letter calling for a six-month research pause on AI. The authors of this open letter believe that governments and society can agree in 6 months on how to proceed safely with AI development. The evidence from the last decade does not support that belief.
These are not new concerns for many of us or those that read my occasional posts here.
None of the articles references the similar 2015 letter, led by The Future of Life Institute, that gained far broader support: "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter" (Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter - Future of Life Institute), signed by many of the same signatories as this year's letter and with a similar set of requests, only eight years earlier.
Or the one in 2017, "Autonomous Weapons Open Letter", again signed by over 34,000 experts and technologists. (Autonomous Weapons Open Letter: AI & Robotics Researchers - Future of Life Institute)
Technologists have been asking for guidance, conversation, engagement, and even regulation, for over ten years in the field of AI.
We have also worried, publicly and privately, that the situation mirrors 2007, with technologists set to replace bankers as the cause of all our troubles.
Although in this case, most technologists have warned that a crash is coming.
In 2015, I ran a series of technology conversations with the military for the then-CGS around planning for the future. These talks, presentations and fireside chats were intended to prompt preparation for AI by 2025, especially for the command and control systems due to enter service in 2018.
A key aspect was building the platform to exploit and plan how AI will change military operations.
Yet the response was negative.
"Scientific research clearly shows that AI will not be functional in any meaningful way before 2025" was one particularly memorable response from a lead scientist.
Others pointed to the lack of funding for AI in the defence capability plans as a clear indicator that they did not need to worry about it.
They were not alone in ignoring automation. Our militaries, politicians, and broader society have been worried by more significant concerns and issues than ones created by computer programs, bits of software, and code that dreams of electronic cats.
One significant advantage of this new age of AI anxiety is that people are now willing and eager to talk and learn about AI. We have finally got an active conversation with the people who will be most affected by AI.
So how do we use this opportunity wisely?
People are scared of something they do not understand. Everyone should grow their understanding of AI: how it works, what it can do, and what it should not do.
Here are a few suggestions to help prepare, with light tips to prompt debate and provoke challenges aimed at people reading headlines and wanting to know more rather than experts and AI developers.
First, I suggest three books to understand where we are today, the future, and where we should be worried.
Books
Life 3.0 Being Human in the Age of Artificial Intelligence by Max Tegmark, 2017. The author is the President of the Future of Life Institute and behind many of the open letters referenced above. Not surprisingly, Max takes a bold view of humanity's future. Some of the ideas proposed are radical, such as viewing life as a waveform transferable from carbon (humans) to silicon (machines). However, Life 3.0 is the ideal start for understanding the many challenges and tremendous opportunities presented in an age with AI.
AI Superpowers China, Silicon Valley, and the New World Order by Kai-Fu Lee, 2018. A renowned expert in AI examines the global competition between the United States and China in AI development. Lee discusses the impact of AI on society, jobs, and the global economy with insights into navigating the AI era.
21 Lessons for the 21st Century by Yuval Noah Harari, 2018. A broader view of the challenges and trends facing us all this century. Harari is a master storyteller; even if you disagree with his perspective, you cannot fault his provocations. For instance, he asks whether AI should protect human lives or human jobs: letting humans drive vehicles is statistically worse for humans, with drivers influenced by alcohol or drugs causing 30% of road deaths and distracted drivers a further 20%.
Film
Three broad films prompt consideration of AI in society. I wondered if films would be appropriate suggestions, but each takes an aspect of AI and considers how humans interact with it:
Ex Machina - Dir. Alex Garland, 2014. Deliberately thought-provoking thriller that explores AI, consciousness, and ethical implications of creating sentient beings. The film shows the default Hollywood image of "AI and robots" as attractive, super intelligent, wise androids. If you have seen it before, consider the view that all the main characters are artificial creations rather than humans.
Her - Dir. Spike Jonze, 2013. A poignant film about humanity, love, relationships, and human connection in a world with AI. The AI in "Her" is more realistic, where a functional AI adapts to each individual to create a unique interaction yet remains a generic algorithm.
Lo and Behold: Reveries of the Connected World - Dir. Werner Herzog, 2016. A documentary that explores the evolution of the internet, the essential precursor to an age with AI, and how marvels and darkness now fill our connected world. Herzog, in his unique style, also explores whether AI could create a documentary as well as he can.
Websites
Three websites that will help you explore AI concepts, tools, and approaches:
Partnership on AI (https://www.partnershiponai.org/) The Partnership on AI is a collaboration among leading technology companies, research institutions, and civil society organizations to address AI's global challenges and opportunities. Their website features a wealth of resources, including research, reports, and news on AI's impact on society, ethics, safety, and policy.
AI Ethics Lab (https://aiethicslab.com/) The AI Ethics Lab is an organization dedicated to integrating ethical considerations into AI research and development. Their website offers various resources, including articles, case studies, and workshops that help researchers, practitioners, and organizations to understand and apply ethical principles in AI projects.
The Alan Turing Institute (https://www.turing.ac.uk/) The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The website features many resources, including research papers, articles, and news on AI, data science, and their ethical implications.
Experiments
Hands-on experiments with AI and the basics of its building blocks; these require a little coding awareness but are usually well explained and clearly demonstrated:
Google AI (https://ai.google/) is the company's research hub for artificial intelligence and machine learning. The website features a wealth of information on Google's AI research, projects, and tools. While the focus is primarily on the technical aspects of AI, you can also find resources on AI ethics, fairness, and responsible AI development.
OpenAI (https://www.openai.com/) OpenAI is a leading research organization focused on developing safe and beneficial AI. Their website offers various resources, including research papers, blog posts, and news on AI developments.
TensorFlow (https://www.tensorflow.org/) TensorFlow is an open-source machine learning library developed by Google Brain. The website offers comprehensive documentation, tutorials, and guides for beginners and experienced developers.
Podcasts
Uncharted by Hannah Fry (BBC Sounds - Uncharted with Hannah Fry - Available Episodes), in which the brilliant Hannah Fry tells ten stories profoundly shaped by data and a single chart. A great collection that explains how influential data is in our world.
The Lazarus Heist (BBC World Service - The Lazarus Heist - Downloads) Hackers, North Korea and billions of dollars. A detailed and enjoyable study into how North Korean hackers raise billions for nuclear weapons research, and demonstrates how connected our world is even for people who are disconnected.
These are just introductions and ideas, not anything like an entire course of education or meant to cover more than getting a conversation started.
It also struck me while making these lists that many of the texts and media are over five years old. That is likely indicative that media needs time to become relevant, and that more recent items, especially those predicting futures, need time to prove their worth.
I'd be fascinated to read your suggestions for media that help everyone become more comfortable in the Age With AI.
Quantum Quirks & Cloudy Conundrums: Unravelling the Quantum Computing Future with AI and Cloud Technology Today
As we stand on the cusp of a new era in computing, coders and users must become more comfortable with cloud computing and AI technologies
As we stand on the cusp of a new era in computing, coders and users must become more comfortable with cloud computing and AI technologies. Embracing these powerful tools today will pave the way for a smooth transition into quantum computing, the next big computational wave.
Midjourney prompt: Atomic particles
Without AI and a cloud platform, organisations are unlikely to succeed in an age with quantum.
Quantum computing, based on the principles of quantum mechanics, is a fundamentally different paradigm compared to classical computing. It uses qubits instead of classical bits to store and process information, allowing for parallel processing and the potential to solve problems much more efficiently than classical computers. However, the unique properties of quantum computing present several challenges, such as working with quantum states, developing new algorithms, and dealing with noise and errors in quantum hardware.
Quantum systems, like molecules and materials, are governed by the laws of quantum mechanics, which are inherently probabilistic and involve complex interactions between particles. People mistakenly believe that quantum computers are just accelerated classical computers; in fact, only specific problems are well suited to quantum solutions. An example of a problem that quantum computers can solve more efficiently than classical computers is the simulation of quantum systems.
Classical computers can struggle with simulating quantum systems due to the exponential growth in the complexity of the quantum state space as the number of particles increases. This is known as the “exponential scaling problem”, making accurate simulation of large quantum systems computationally infeasible using classical methods.
Quantum computers, on the other hand, can inherently represent and manipulate quantum states due to their quantum nature. This makes them well-suited for simulating quantum systems efficiently. Simulating quantum systems more effectively will advance fields including material science, chemistry, and drug discovery. Scientists could design new materials with tailored properties or discover new drugs by understanding the complex quantum interactions at the molecular level.
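A back-of-the-envelope sketch of that exponential scaling problem: a full statevector needs one complex amplitude, roughly 16 bytes at double precision, for every basis state, and the number of basis states doubles with each extra qubit.

```python
# Back-of-the-envelope illustration of the exponential scaling problem:
# a full n-qubit statevector holds 2**n complex amplitudes, each taking
# roughly 16 bytes at double precision.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:2d} qubits: {amplitudes:>20,d} amplitudes, about {gigabytes:,.1f} GB")
```

By 50 qubits the statevector alone would need around 18 million gigabytes of memory, which is why classical simulation of large quantum systems breaks down.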
Realising these breakthroughs will need AI support. The current excitement around Generative AI is just the start, where Large Language Models can help debug or write code in various languages. Google Bard, for instance, codes in over 20 languages.
Yet coding for quantum computing is significantly more complex than classical coding. A good developer will still need a strong foundation in programming languages, data structures, algorithms, problem-solving, and critical thinking abilities. Being adept at understanding requirements, breaking down complex tasks into manageable components, and debugging code effectively will still distinguish better developers.
Additionally, good developers demonstrate strong communication and collaboration skills, allowing them to work effectively in an agile team setting. They possess a growth mindset, remaining open to learning new technologies and adapting to changes in their field.
In an age with quantum, developers will need to be comfortable with the following:
Qubits and quantum states: Qubits can exist in a superposition of states, enabling parallel information processing. However, this also makes them more challenging to work with, as programmers must consider quantum superposition, entanglement, and other quantum phenomena when coding.
Quantum logic gates: Quantum computing relies on quantum gates to perform operations on qubits. These gates are different from classical logic gates and have unique properties, such as reversibility. Programmers need to learn these new gates and their properties to perform computations on a quantum computer.
Error correction and noise: Quantum computers are highly sensitive to noise and errors, which can result from their interactions with the environment or imperfect hardware. This sensitivity makes it challenging to develop error-correcting codes and algorithms that can mitigate the effects of noise and maintain the integrity of quantum computations.
Quantum algorithms: Quantum computing requires the development of new algorithms that take advantage of quantum parallelism, superposition, and entanglement. This involves rethinking existing classical algorithms and developing new ones from scratch to exploit the power of quantum computing.
Hybrid computing: Many quantum algorithms are designed to work alongside classical algorithms in a hybrid computing approach. This requires programmers to deeply understand classical and quantum computing principles to design and integrate algorithms for both platforms effectively.
Learning curve: Quantum computing involves many complex physics, mathematics, and computer science concepts. This steep learning curve can be challenging for new programmers, as they need to develop a deep understanding of these concepts to write code for quantum computers effectively.
Software tools and languages: While there are emerging software tools and programming languages designed explicitly for quantum computing, such as Qiskit, Q#, and Cirq, these tools are still evolving and can be limited in functionality compared to mature classical programming tools.
Overall, the challenges associated with coding for quantum computers mainly stem from the fundamentally different principles and concepts of quantum computing. As the field matures and more resources become available, these challenges may become more manageable for programmers. Yet, for most, help will be needed, especially during the quantum adoption phase when current programmers transition to quantum programmers.
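To ground a couple of those concepts, here is a minimal sketch using Qiskit, one of the toolkits named above: it puts one qubit into superposition with a Hadamard gate, entangles it with a second qubit via a CNOT gate, and then inspects the resulting statevector.

```python
# A minimal sketch of superposition and entanglement with Qiskit.
# Assumes the `qiskit` package is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # Hadamard gate: put qubit 0 into an equal superposition
qc.cx(0, 1)  # CNOT gate: entangle qubit 1 with qubit 0 (a Bell state)

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}
```

Measuring either qubit collapses both, which is exactly the kind of behaviour classical programmers never have to reason about.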
AI will play an essential role in addressing these challenges, making it a critical tool in unlocking the power of quantum computers. Useful examples include:
Quantum error correction: to identify and correct errors in quantum systems more efficiently. By analysing and learning from patterns of errors and noise in quantum hardware, AI can help improve the robustness and reliability of quantum computations.
Algorithm development: to identify more efficient or novel ways to perform quantum computations, leading to better algorithms for various applications, such as cryptography, optimisation, and quantum simulations.
Quantum control: optimises the sequences of quantum gates and operations, which is crucial for achieving high-fidelity quantum computations. By learning the best control parameters for a given quantum system, AI can help improve the performance and precision of quantum operations.
Hybrid algorithms: to identify the most efficient way to partition tasks between the classical and quantum subsystems. This ensures that the overall algorithm is effective and efficient, combining classical and quantum computing resources to solve complex problems (a minimal sketch follows this list).
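As an illustration of that hybrid pattern, here is a minimal sketch, again assuming Qiskit plus SciPy: a classical optimiser repeatedly adjusts the rotation angle of a tiny quantum circuit until the simulated probability of measuring |1⟩ hits a target value. Real variational algorithms follow the same loop at far greater scale.

```python
# A minimal sketch of the hybrid classical/quantum loop: a classical
# optimiser (SciPy) tunes a parameter of a small quantum circuit
# (simulated with Qiskit) to reach a target measurement probability.
import numpy as np
from scipy.optimize import minimize_scalar
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

TARGET_PROB_ONE = 0.25  # desired probability of measuring |1>

def cost(theta: float) -> float:
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)  # quantum step: a parameterised rotation
    prob_one = Statevector.from_instruction(qc).probabilities()[1]
    return (prob_one - TARGET_PROB_ONE) ** 2  # classical step: score the result

result = minimize_scalar(cost, bounds=(0, np.pi), method="bounded")
print(f"theta = {result.x:.3f} rad, cost = {result.fun:.2e}")
```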
Developers will still need access to cloud computing. Cloud platforms have contributed significantly to the widespread adoption of AI technologies by providing access to powerful computational resources and facilitating collaboration among researchers, and they will play a similar role in developing and adopting quantum computing. Some of the ways cloud computing can help overcome the challenges associated with quantum computing include:
Access to quantum hardware: Quantum computers are still in the early stages of development and are expensive to build and maintain. Cloud computing enables researchers and developers to access quantum hardware remotely without investing in their own quantum infrastructure. Companies like IBM and Google offer access to their quantum hardware through cloud-based platforms, allowing users to experiment with and test their quantum algorithms.
Scalability: Cloud computing provides a scalable platform for running quantum simulations and algorithms. Users can request additional resources to run complex simulations or test larger-scale quantum algorithms. This flexibility allows for faster development and testing of quantum algorithms without needing dedicated, on-premise hardware.
Collaboration: Cloud-based platforms can facilitate cooperation between researchers and developers on quantum computing projects. These platforms can promote knowledge exchange and accelerate the development of new quantum algorithms and applications by providing a centralised platform for sharing code, data, and results.
Integration with classical computing: Quantum computing often involves hybrid algorithms that combine classical and quantum resources and data. Cloud computing platforms can seamlessly integrate classical and quantum computing resources, enabling users to develop and test hybrid algorithms more quickly.
Data security and storage: Cloud computing platforms can offer secure storage and data processing solutions for quantum computing applications. This can be particularly important for applications that involve sensitive information, such as cryptography or data analysis.
By embracing cloud computing technologies, organisations will be better prepared to understand and leverage the benefits of quantum computing as it becomes more widely available. Cloud computing enables seamless integration with AI technologies, which is essential for overcoming the unique challenges associated with quantum computing and maximising its potential across various industries and applications.
As we grapple with AI adoption and, in many sectors, are only just truly embracing cloud platforms, why is this important now?
Gaining proficiency in cloud computing and AI technologies today is essential in preparing for tomorrow’s quantum computing revolution. As quantum computing emerges, AI will be crucial in overcoming its unique challenges and maximising its potential across various industries and applications.
Those organisations and teams that are familiar with these technologies now, and have regular access to emerging developments, will be well-prepared to capitalise on the opportunities that quantum computing will offer soon.
Now is the time to invest effort into understanding and mastering cloud computing and AI with the intent to embrace the transformative potential of quantum computing as it becomes more accessible. Integrating AI and cloud computing will play a crucial role in addressing the challenges of quantum computing, enabling faster development, greater collaboration, and more effective solutions. Successful organisations will be well-versed in these areas to prepare for the future of computing and ensure that they remain at the forefront of innovation and progress.
Exponential Growth with AI-Moments. Who needs the singularity?
We are in the Age of With, where everyone realises that AI touches our daily lives. An AI-Moment is an interaction between a person and an automation, and these moments are now commonly boosting productivity or reducing our unwanted activities
Midjourney prompt: exponential growth with AI
We are in the Age of With, where everyone realises that AI touches our daily lives.
An AI-Moment is an interaction between a person and an automation, and these moments are now commonly boosting productivity or reducing our unwanted activities. Yet are we truly prepared to seize these opportunities as individuals, organisations, or society?
AI-Moments may be insignificant to us, for instance when a presentation slide is re-designed, or your car prompts a better commute route. These AI-Moments may be more significant when they determine every student’s academic grade [1] or rapidly evaluate a new vaccine [2]. AI-Moments are touching us all and they are the building blocks for imminent exponential growth in human and business performance.
Exponential growth needs AI-Moments that are ubiquitous, accelerated and connected.
Ubiquitous adoption of AI-Moments has already happened. It may be subtle, but everyone is already working with AI-Moments. Take this article that you are reading. An AI-Moment probably moved this up your notice list, created a list of people to share this with, helped your search tool find this article or prompted an individual to send this to you. As I am writing this piece, AI-Moments are suggesting better phrases, ways to increase effective impact, or improvements to my style [3].
Beyond the immediate pool of technology, AI-Moments are affecting how factories function through productivity tracking [4], changing call centres by replacing people with automated responses [5], or transforming our retail industry and high streets through online shopping. Take a moment to look at your daily routine or immediate environment to realise just how AI-Moments are already ubiquitous.
As you look around, consider how their adoption is accelerating in terms of quality and scale. This is because it is easier than ever to create and adopt AI-Moments. Applications are readily available that children can use to build AI-Moments that identify plants, recognise hand gestures, or detect emotions [6]. Monitoring satellite images for changes [7], recognising galaxies, or equipment analytics are all just as simple to build and adopt. Our most critical systems might require more robust solutions for the moment, but the acceleration of AI-Moment adoption is clear. AI-Moments that were not possible five years ago are now commonplace. They are, quite literally, child’s play [8].
Elsewhere, the first better-than-human translation between two languages occurred in 2018, after 20 years of research [9]. Applying that research to a further 9 languages took only 12 months [10]. This pace of change is universal. Google DeepMind solved a 50-year-old protein-folding grand challenge in biology in November 2020, after four years of development and then mere weeks of training their AlphaFold solution. They are already using that same model on diseases and viruses, predicting previously unknown COVID-19 protein structures [11].
AI-Moments are changing how we act, and their creation is changing how quickly we can re-act.
This creates a significant survival challenge, especially for organisations. An organisation that recognises, adopts, and accelerates AI-Moments across its functions has a distinct advantage over one struggling to do the same. Survival needs AI-Moments to break out of innovation or technology spaces, as rival organisations that deploy AI everywhere can act, re-act and improve faster while their competitors are still experimenting. Winners adopt and scale solutions better and faster using AI-Moments [12].
This will create the platform for exponential growth. First, we recognise AI-Moments are touching everything at greater pace and that they combine to multiply our performance. Then, as their pace expands, we realise that only AI-Moments can effectively manage this growth. People will find it too complex, or time consuming to understand, combine and exploit multiple AI-Moments. We will need AI to manage our AI with more AI-Moments.
AI-Moments are the common platform for exponential growth
Take the child’s app to recognise animals. The child shows the application a collection of cat photographs and the machine recognises cats. Show it a dog photo and it knows that it is not a cat, so we need another process to train dog recognition. The only way to improve recognition is through more cat or dog images, and even the internet has a limited quantity of cat photographs [13].
Instead, create an AI-Moment to recognise cats, then another AI-Moment to create synthetic cat photographs in new positions or environments. This is already a standard approach to train AI [14]. Using AI-Moments in this way exponentially accelerates learning as the only limit is the computing power available and not the quantity of cat photographs.
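A minimal sketch of that synthetic-data idea, using simple image augmentation with Pillow rather than a full generative model: each original photograph is flipped, rotated, and re-lit to create new training variants. The grey placeholder image stands in for a real cat photograph.

```python
# A minimal sketch of creating extra training images from one original
# using simple augmentation (Pillow), rather than a full generative model.
from PIL import Image, ImageEnhance, ImageOps

# Placeholder standing in for a real photo; replace with Image.open("cat.jpg").
original = Image.new("RGB", (256, 256), (128, 128, 128))

variants = {
    "mirrored": ImageOps.mirror(original),
    "rotated": original.rotate(15, expand=True),
    "brighter": ImageEnhance.Brightness(original).enhance(1.4),
    "darker": ImageEnhance.Brightness(original).enhance(0.6),
}

for name, image in variants.items():
    image.save(f"cat_{name}.png")  # four new training examples from one photo
```

Generative approaches go further, creating cats in entirely new poses and settings, but the principle is the same: the training data grows with the available compute rather than with the supply of real photographs.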
We can apply this approach to our current activities and processes, yet that creates a dilemma that will confront every single person and organisation: we will need more AI-Moments to manage, exploit and grow our performance. This will create exponential growth and, in turn, require more AI-Moments.
Our current concerns are around automating processes, replacing roles, or accelerating functions. They are “A to C” solutions, with success measured by how well an AI-Moment completes step “B”. Creating more complex flows is already normal, whether using another application to create them or copying someone else’s pattern to replace a familiar activity. These new complex flows effectively extend our solution from “A to n” with multiple steps in-between.
Automated AI-Moments will drive exponential growth, and will occur when existing automations are everywhere, accelerating performance and connections.
We are now on the cusp of significant transformation, where multiple AI-Moments interact, regularly in ways that we did not predict, expect or, sometimes, even request.
As an example, consider the routine of a typical salesperson. There are already solutions to automate office routines for meeting requests, room bookings and email responses. The first step is collating those automations into one “Get to Inbox Zero” AI-Moment that involves a quick review of proposed responses and then responds: email replies based on your previous responses, all rooms booked, all requests sent, and automated prompts for more complex responses (expressed in simple language for the user to approve, “Yes, agree to the request, use the agenda from the meeting last Tuesday”).
Then add in automated lunch reservations, travel tickets booked, hotels reserved, agendas created, minutes captured, presentations built, contracts drafted, and legal reviews completed. Include automated suggestions for new clients based on your current sales, existing targets, customer base, and market insights, with people identified to bring you together through an automated request that is already drafted in just the right way to get a positive response.
All these routines exist today in separate AI-Moments. Very soon these AI-Moments will connect and automate together.
There is often talk about the Singularity, the moment when machines will surpass human intelligence, and the idea that a single AI machine will achieve this superiority. The combination of AI-Moments does not need a super-intelligent AI, or a General AI able to process any problem. It just requires a connected collection of ubiquitous AI-Moments, each replacing a small step of a larger routine. Each applies the rule of marginal gains, and together they create exponential growth in potential. It may not be the singularity that futurologists predict, but its effect will be similar, as AI-Moments replace human activity in a way that surpasses human insight or comprehension.
This is the Age of With, and AI-Moments are the common units of change.
[1] A-levels and GCSEs: How did the exam algorithm work? - BBC News
[2] UK plans to use AI to process adverse reactions to Covid vaccines | Financial Times (ft.com)
[3] Introducing Microsoft Editor – Bring out your best writer wherever you write - Microsoft Tech Community
[4] This startup is using AI to give workers a “productivity score” | MIT Technology Review
[5] AWS announces AWS Contact Center Intelligence solutions | AWS News Blog (amazon.com)
[7] As wildfire season approaches, AI could pinpoint risky regions using satellite imagery | TechCrunch
[9] Translating news from Chinese to English using AI, Microsoft researchers reach human parity milestone
[10] AI wave rolls through Microsoft’s language translation technologies
[11] https://www.deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
[12] Building the AI-Powered Organization (hbr.org)
[13] https://en.wikipedia.org/wiki/Cats_and_the_Internet
[14] Adversarial training produces synthetic data for machine learning (amazon.science)