Practical AI (Sep 2024)
A four-week course with Stephen Reid
Zoom link: https://us06web.zoom.us/j/9082171779
Week 1
AI (Artificial Intelligence), ML (Machine Learning) and DL (Deep Learning)
Jay's Visual Intro to AI
/images/image38.png)
/images/image43.png)
– Seema Singh, Towards Data Science
See Machine Learning Algorithms In Layman's Terms, Part 1 and What is Deep Learning? - MachineLearningMastery.com to learn more about the main paradigms of machine learning and deep learning (supervised learning, unsupervised learning, reinforcement learning)
The state of DL-powered AI
AI already beats the average human at a number of key tasks…
/images/image61.png)
…and is expected to surpass humans on a bunch more over the coming years
/images/image28.png)
Recommended newsletters and podcasts
"With so many new AI newsletters popping up, it can be overwhelming to figure out which you'd actually enjoy reading. To choose the right AI newsletter, consider your level of expertise in the field. If you're a beginner, AI Breakfast. If you know a bit more about tech, you might want a newsletter that talks about daily software and tools you could implement today, if so – Ben's Bites. And lastly, if you want a newsletter that dives deep into technical discussions and emerging research you should go with The Batch. You really won't go wrong with any of them, they're all extremely interesting reads about technology that is actively changing the world."
My favourites:
/images/image62.png)
Co-Intelligence: Living and Working with AI by Ethan Mollick | Goodreads
AI tool directories
The best case
/images/image15.png)
"If we can safely harness the power of AI for human betterment,
then we can paint a utopian future our ancestors could hardly fathom.
A future free of disease and hunger,
where biotechnology has stabilised the climate and biodiversity.
Where abundant clean energy is developed in concert with AI;
Where breakthroughs in rocketry and materials sciences
have propelled humans to distant planets and moons;
And where new tools for artistic and musical expression
open new frontiers of beauty, experience, and understanding."
– THE HUMAN FUTURE: A Case for Optimism
Reducing toil:
Catalysing creativity:
Promoting health:
Improving education:
History of deep learning
/images/image25.png)
– A brief history of AI - Raconteur
1950s and 1960s: Birth and Initial Excitement
Photo of Frank Rosenblatt from Professor’s perceptron paved the way for AI – 60 years too soon
Perceptron Research from the 50's & 60's, clip
- The concept of a perceptron, an early type of neural network, was introduced by Frank Rosenblatt in the late 1950s.
- This period saw the first wave of excitement about the potential of these models to mimic brain-like computation and solve complex problems.
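To make the idea concrete, here is a minimal sketch (not from the course materials) of Rosenblatt-style perceptron learning in Python/NumPy. The data, learning rate and epoch count are arbitrary illustrative choices: the single-layer model learns the linearly separable AND function but fails on XOR, the limitation discussed in the next section.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron with a step activation (Rosenblatt-style updates)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred
            w += lr * error * xi   # nudge the weights towards the correct answer
            b += lr * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: a single layer fails

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = [(1 if xi @ w + b > 0 else 0) for xi in X]
    print(name, "predictions:", preds, "targets:", list(y))
```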
1970s: First Winter
- Marvin Minsky and Seymour Papert published "Perceptrons" in 1969, which highlighted the limitations of perceptrons, especially their inability to solve non-linearly separable problems (like the XOR problem). This led to a decline in interest and funding for neural network research.
“What Rosenblatt wanted was to show the machine objects and have it recognize those objects. And 60 years later, that’s what we’re finally able to do,” Joachims said. “So he was heading on the right track, he just needed to do it a million times over. At the time, he didn’t know how to train networks with multiple layers. But in hindsight, his algorithm is still fundamental to how we’re training deep networks today… He lifted the veil and enabled us to examine all these possibilities, and see that ideas like this were within human grasp.”
1980s: Revival with Backpropagation
- The introduction of the backpropagation algorithm in the 1980s by David Rumelhart, Geoffrey Hinton, and Ronald Williams allowed for training of multi-layer perceptrons, overcoming the limitations pointed out by Minsky and Papert.
- This led to a resurgence in interest in neural networks. The availability of more powerful computers also helped fuel this interest.
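As an illustration of why backpropagation mattered, here is a small sketch (again, not from the course materials) of a two-layer network trained by gradient descent on XOR, the very problem a single perceptron cannot solve. The layer size, learning rate and iteration count are arbitrary, and results vary with the random seed.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Two layers: 2 inputs -> 8 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 1.0

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (chain rule) for a squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Should end up close to [0, 1, 1, 0] on most runs
print(np.round(out, 2).ravel())
```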
1990s: Second Winter
/images/image72.png)
– What Was Actually Wrong With Backpropagation in 1986?, slide by Geoffrey Hinton
- Neural networks, especially deeper ones, suffered from issues like vanishing and exploding gradients, making them difficult to train.
- Other machine learning techniques like Support Vector Machines (SVMs) and simpler methods became more popular due to their better performance in various tasks and clearer theoretical foundations.
2010s: Deep Learning Revolution
- This is when neural networks, especially deep neural networks, began achieving state-of-the-art results in various tasks such as image recognition (e.g., ImageNet competition), speech recognition, and machine translation. Several factors contributed to this resurgence:
- Data: With the advent of the internet, there's been an explosion in the amount of data available. Neural networks, especially deep ones, perform better with more data.
- Hardware/'compute': The rise of Graphics Processing Units (GPUs) for parallel computation made it feasible to train very large neural networks.
- Software/algorithms: Techniques like dropout, batch normalisation, and advanced activation functions were developed, making it easier to train deeper networks.
Late 2010s to Present: Generative AI
- The late 2010s marked the rise of generative models within the deep learning community. Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), diffusion models and other generative techniques began to demonstrate the ability to create realistic images, sounds, and videos.
- OpenAI's GPT series and models like BERT showcased the power of generative models in the realm of natural language processing, producing human-like text and achieving state-of-the-art results in numerous NLP tasks.
- The potential of generative AI has expanded the horizons of what's possible in fields like art, music, entertainment, and design. AI-driven content creation tools, style transfer techniques, and deepfakes are notable outcomes of this era.
- The advancements in generative AI have also brought about new challenges and ethical considerations, particularly in areas like misinformation and the authenticity of digital content.
The Triple Exponential
The notion of the "triple exponential" in AI progress refers to the rapid growth in three key areas: data, hardware, and software. Each of these areas is experiencing exponential growth, and the combination of all three has catalysed the rapid advancement of AI in recent years:
- Data: The digital age has led to an explosion in the amount of data available. Every online interaction, transaction, sensor reading, etc., generates data. The availability of big data is crucial for training sophisticated AI models, especially deep learning models that require vast amounts of labelled data. The exponential growth in data availability has been a key driver in the success of modern AI.
- Hardware/'compute': This refers to the exponential growth in computational power. Moore's Law famously predicted that the number of transistors on a microchip would double approximately every two years, leading to an exponential increase in processing power. While the original formulation of Moore's Law has seen some challenges, other hardware innovations like specialised AI accelerators (e.g., TPUs or GPUs) and advancements in quantum computing continue to drive rapid growth in computational capabilities.
- Software/algorithms: AI algorithms and models are becoming increasingly sophisticated. Deep learning, which was conceptualised decades ago, has seen a resurgence in the 2010s due to the availability of large datasets and powerful hardware. The exponential progress in software includes not only the development of new algorithms but also the refinement of existing ones to achieve better performance.
What is a Large Language Model (LLM)?
/images/image49.png)
What are Large Language Models (LLMs)?
Introduction to large language models
What are Large Language Models? | NVIDIA
Highly recommended:
Large Language Models from scratch
Large Language Models: Part 2
A Very Gentle Introduction to Large Language Models without the Hype | by Mark Riedl | Medium
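The core idea behind all of these models is next-token prediction: given the text so far, assign a probability to every possible next token and sample one. A toy character-level sketch (nothing like a real LLM in scale, just counting which character tends to follow which) makes the principle concrete; real LLMs do the same thing with subword tokens, billions of parameters and a transformer instead of a lookup table.

```python
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat. the cat ate the rat."

# Count, for each character, which character tends to follow it
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def generate(prompt, length=30):
    """Repeatedly sample a likely next character, given the last one."""
    text = prompt
    for _ in range(length):
        options = counts.get(text[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        text += random.choices(chars, weights=weights)[0]
    return text

print(generate("the c"))
```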
LLMs
/images/image52.png)
Folks in this course are probably most interested in the balance between quality, speed (output tokens/s) and context window size.
From LLM Leaderboard | Artificial Analysis, sorting by Quality:
/images/image35.png)
Sorting by context window:
/images/image9.png)
The leaders in the field at this point are:
- OpenAI (o1 and GPT)
- Anthropic (Claude)
- Mistral (Large 2)
- Meta (Llama – the only open-weights model on this list; it can be run locally)
- Google (Gemini)
/images/image32.png)
Official apps
- Different models
- Upload files
- Edit prompts
- Stop generating
- Regenerate
- Microphone
- Voice chat
- GPTs
- I have Memory and Web Browsing turned off
- Flash is extremely fast
- Use AI Studio for long context tasks
- Mistral Chat
- Meta AI (Argentina, Australia, Cameroon, Canada, Chile, Colombia, Ecuador, Ghana, India, Jamaica, Malawi, Mexico, New Zealand, Nigeria, Pakistan, Peru, Singapore, South Africa, Uganda, United States, Zambia and Zimbabwe)
Alternatives to official apps
- OpenAI
- Anthropic
- Google Gemini
- Mistral
- (Groq, Perplexity, OpenRouter)
/images/image73.png)
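Several of these alternatives (Groq and OpenRouter, for example) expose OpenAI-compatible endpoints, so the official openai Python client can be pointed at them just by swapping the base URL and API key. A minimal sketch, assuming the openai package is installed and the relevant key is set; the base URLs and model id below are examples to check against each provider's docs:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at an OpenAI-compatible provider.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",        # or https://api.groq.com/openai/v1
    api_key=os.environ["OPENROUTER_API_KEY"],       # or GROQ_API_KEY
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct",      # example model id; check the provider's list
    messages=[{"role": "user", "content": "Explain context windows in one sentence."}],
)
print(response.choices[0].message.content)
```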
Working with Markdown
/images/image57.png)
Typora is fantastic! Paste & 'Copy without theme styling'.
Running LLMs locally
/images/image60.png)
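One common way to run models locally (not the only one) is Ollama, which serves them over a local HTTP API. A minimal sketch, assuming Ollama is installed and you have already pulled a model such as llama3.1:

```python
import requests

# Ollama's local server listens on port 11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",          # any model you have pulled with `ollama pull`
        "prompt": "Give me three uses for a local LLM.",
        "stream": False,              # return the whole response at once
    },
    timeout=120,
)
print(resp.json()["response"])
```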
Prompt engineering
/images/image10.png)
Prompt Engineering 101 - Crash Course & Tips
Master the Perfect ChatGPT Prompt Formula (in just 8 minutes)!
Prompt Engineering Tutorial – Master ChatGPT and LLM Responses
The ULTIMATE Beginner's Guide to Prompt Engineering with GPT-4 | AI Core Skills
I Discovered The Perfect ChatGPT Prompt Formula
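Most of the formulas in these videos boil down to giving the model a role, some context, a clear task, and the output format you want. A sketch of one such structured prompt (the wording is just an illustration, not a canonical template):

```python
prompt_template = """
Role: You are an experienced copy editor.
Context: I am writing a newsletter for beginners in AI.
Task: Rewrite the paragraph below so it is clear and friendly.
Format: Return only the rewritten paragraph, in Markdown.

Paragraph:
{paragraph}
""".strip()

paragraph = "LLMs is models that predicts next tokens and are use for chat."

# Paste the result into any chat UI, or send it through an API
print(prompt_template.format(paragraph=paragraph))
```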
Prompt databases
'Hallucinations'/Fabrication/BS
/images/image54.png)
Recommended:
Context windows
/images/image70.png)
Google's NEW Gemini 1.5 Flash SHOCKS the Industry (2M context window!)
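Context windows are measured in tokens, not characters or words, so it helps to be able to count them. A rough sketch using the tiktoken library (the cl100k_base encoding is used by several OpenAI models; other providers tokenise differently):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "A context window is the amount of text a model can consider at once."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens for {len(text)} characters")
print(tokens[:10])          # the first few token ids
print(enc.decode(tokens))   # round-trips back to the original text
```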
Notable LLM-based apps
NotebookLM
https://notebooklm.google/
/images/image13.png)
Personalized AI research assistant powered by Google's most capable model, Gemini 1.5 Pro
Perplexity
https://www.perplexity.ai/
/images/image42.png)
Poe
https://poe.com/
/images/image4.png)
Cursor
https://cursor.sh/
/images/image39.png)
/images/image47.png)
- Code in natural language!
- See also:
Lex
https://lex.page/
/images/image59.png)
- AI writing assistant ("Never write alone again")
- Good example of 'co-intelligence'
GPT for Work
https://gptforwork.com/
/images/image63.png)
- GPT for Sheets is particularly interesting, as an easy way of running many prompts in parallel
- Install via https://gptforwork.com/
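The "many prompts in parallel" pattern that makes GPT for Sheets useful is easy to reproduce in plain Python. A rough sketch using the openai client and a thread pool (the model name and concurrency level are arbitrary choices, and it assumes OPENAI_API_KEY is set):

```python
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rows = ["Paris", "Nairobi", "Jakarta", "Lima"]  # imagine these are spreadsheet cells

def run_prompt(city: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": f"One-sentence fun fact about {city}."}],
    )
    return response.choices[0].message.content

# Run the prompts concurrently, keeping results in row order
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_prompt, rows))

for city, fact in zip(rows, results):
    print(f"{city}: {fact}")
```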
Getting an OpenAI API key
Getting an Anthropic API key
https://console.anthropic.com/
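Once you have the keys, a quick way to check they work is a minimal call to each API. A sketch assuming the openai and anthropic Python packages are installed and the keys are exported as environment variables (the model names are examples that will date quickly):

```python
from openai import OpenAI
import anthropic

# Both clients read their keys from the environment by default:
# OPENAI_API_KEY and ANTHROPIC_API_KEY
openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

question = "Reply with the single word: working"

openai_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": question}],
)
print("OpenAI:", openai_reply.choices[0].message.content)

anthropic_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=20,
    messages=[{"role": "user", "content": question}],
)
print("Anthropic:", anthropic_reply.content[0].text)
```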
Bonus: Turning Kindle books into text with Calibre/DeDRM
- Go to Manage Your Content and Devices
/images/image37.png)
- Go to Books and select Download & transfer via USB
/images/image56.png)
- Select any device and click Download. The book should download to your computer.
/images/image23.png)
- Install Calibre
- Install DeDRM
- Download the zip file here
- Once downloaded, create a new folder and name it whatever you like
- Extract the zip file into that folder
- Go to Calibre, then Preferences > Advanced > Plugins > Load plugin from file > New folder you created > Select DeDRM_plugin.zip
- Plugin should successfully load into Calibre
- Add and convert books:
- Once downloaded, go to Calibre and select Add Books. Select the books you wish to convert into EPUBs/other formats and they should load onto Calibre
- Once downloaded, select the book(s) and press Convert Books
- When the new menu pops up, ensure the Output Format on the top right is what you require, and press OK
- Voila! It should remove the DRM from your Kindle book
(Last parts copied from this guide)
Week 1 recording
https://www.youtube.com/watch?v=OIkqbDHv3Fk
Week 2
AI image generation
The current leaders: FLUX, Ideogram, Midjourney, Playground
/images/image65.png)
Note how cheap Playground and FLUX.1 [schnell] are
FLUX.1 [schnell] is also blazing fast!
/images/image19.png)
Midjourney is the most expensive, by far the slowest, and no longer even the highest quality. However, it has by far the most sophisticated UI:
Midjourney's web editor
/images/image27.png)
I highly recommend reading/watching the web editor docs in full. See also the docs for character references, in particular note the cw parameter:
"Use the character weight parameter --cw to set the strength of characterization. --cw accepts values from 0 to 100. --cw 0 focuses on the character's face only. Higher values use the character's face, hair, and clothing. --cw 100 is default."
Other notable image tools
/images/image53.png)
AI voice
/images/image14.png)
AI video
"There are very different use cases for AI video generators. Let's break them down into 3 subcategories because it wouldn't be fair to compare a diffusion model to an AI avatar generator."
AI film
"Teaching the future of storytelling"
Video-to-video:
The Best AI Video Tools Compared: KLING vs LUMA vs Runway Gen 3
Kling vs. Runway Gen 3 vs. Luma Dream Machine vs. Pixverse
/images/image30.png)
AI music
/images/image1.png)
A.I. Sampling and how the Music Industry will change forever
Live from Latent Space (Album made with Google MusicLM)
Week 2 recording
https://www.youtube.com/watch?v=Lps8_ZWs9Ck
Week 3
fal.ai
Fal.ai, which hosts media-generating AI models, raises $23M from a16z and others | TechCrunch
Notable models:
Replicate
/images/image29.png)
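Replicate exposes most of these image models behind a simple Python client. A minimal sketch, assuming the replicate package is installed, REPLICATE_API_TOKEN is set, and that the model identifier below (a FLUX.1 [schnell] listing) is still current:

```python
import replicate

# Generate an image with FLUX.1 [schnell]; the model id is an example and may change
output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={
        "prompt": "a watercolour fox reading a newspaper, soft morning light",
        "num_outputs": 1,
    },
)

# The client typically returns a list of output URLs (or file-like objects in newer versions)
for item in output:
    print(item)
```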
ControlNet Revolutionized How We Use AI To Generate Images
NEW ControlNet for SDXL!
How ControlNet v1.1 Is Revolutionizing AI Art Even Further
Runway
It can be useful to get familiar with the language that is being used around AI image and video, so you know what to search for or ask for:
Week 3 recording
https://www.youtube.com/watch?v=P76qPC40MYQ
Week 4: AI Futures
Books
Key figures
See also The 100 Most Influential People in AI 2024 | TIME
Veterans:
"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview
Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" Romanes Lecture
What happens when our computers get smarter than we are? | Nick Bostrom
The Impact of chatGPT talks (2023) - Prof. Max Tegmark (MIT)
'Doomers':
Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED
AGI in sight | Connor Leahy, CEO of Conjecture | AI & DeepTech Summit | CogX Festival 2023
Skeptics:
The AI Bubble: Will It Burst, and What Comes After?
Why Big Tech is Ruining Our Lives with Brian Merchant - Factually! - 250
Bosses:
Unreasonably Effective AI with Demis Hassabis
- Dario Amodei (Anthropic)
- Sam Altman (OpenAI)
- Yann LeCun (Meta)
- Jensen Huang (Nvidia)
Organisations
Life 3.0 (Max Tegmark)
/images/image12.png)
Max Tegmark lecture on Life 3.0 – Being Human in the age of Artificial Intelligence
Life 3.0: Being Human in the Age of AI | Max Tegmark | Talks at Google
How to get empowered, not overpowered, by AI | Max Tegmark
Max Tegmark on Life 3.0: Being Human in the Age of Artificial Intelligence
On the Lex Fridman podcast:
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
Max Tegmark: AI and Physics | Lex Fridman Podcast #155
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
Glossary
- Life 1.0 (biology): Biological life that evolves based on genetic information and cannot significantly redesign itself during its lifetime. Examples include bacteria.
- Life 2.0 (culture): Life forms like humans that can learn and adapt, essentially having the ability to redesign their software (or neural patterns).
- Life 3.0 (AI/biotech): A life form that can design not only its software (like Life 2.0) but also its hardware. In other words, these life forms would have the ability to drastically change or improve themselves both mentally and physically.
- Intelligence: The ability to acquire and apply knowledge and skills, solve problems, and adapt to new situations. It encompasses reasoning, understanding, learning, and cognition.
- Artificial Intelligence (AI): The simulation of intelligence in machines. It involves creating algorithms that allow computers to perform tasks that would typically require human intelligence. This includes tasks like visual perception, speech recognition, decision-making, and translation between languages.
- Narrow Intelligence/Weak AI: An intelligent system designed and trained for a particular task. Examples include virtual personal assistants, such as Apple's Siri or image recognition software. They operate under a predefined set of rules and do not possess general problem-solving capabilities beyond their specific domain.
- General Intelligence: Often associated with humans, this type of intelligence reflects the ability to understand, learn, and apply knowledge in multiple domains, reason through problems, and be adaptable to varied situations.
- AGI (Artificial General Intelligence)/Strong AI: AI systems that possess (general) intelligence comparable to human beings, allowing them to perform any intellectual task that a human being can do.
- ASI (Artificial Superintelligence, Superintelligence): An intellect that is much smarter than the best human brains in practically every field, including creativity, reasoning, and social skills.
Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds (Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, Jaan Tallinn)
- Friendly AI: An AI that is designed to be beneficial to humanity and safe.
- Utility function: A mathematical representation of what an agent (like an AI) wants or values. It is used to describe the agent's goal or objective.
- Consciousness: The subjective experience of being. Tegmark discusses various perspectives on consciousness and its potential relevance to AI.
- Technological unemployment: The phenomenon where automation and AI displace jobs faster than new jobs are created.
- Existential risk: A risk that threatens the entire future of humanity.
- Recursive self-improvement: The process by which an AI system could potentially improve its own architecture or algorithms, leading to rapid increases in intelligence.
- Intelligence explosion: The idea that once we create an AI smarter than us, it might assist in building an even smarter AI, leading to a rapid chain reaction of increasing intelligence levels.
The intelligence explosion: Nick Bostrom on the future of AI
- Singularity: A hypothetical point in the future when technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes to human civilization. Often, the singularity is associated with the moment when an AI undergoes recursive self-improvement, rapidly advancing its own intelligence beyond human comprehension.
- The Alignment Problem: The challenge of ensuring that the goals and behaviours of AI systems align with human values and intentions. As AI systems become more autonomous and potentially more powerful, there is a concern that they might act in ways that are not beneficial, or are even harmful, to humanity. The alignment problem emphasises the need to develop AI in such a way that it understands and respects our values and does not inadvertently cause harm or operate in unintended ways.
/images/image17.png)
Summary
- Definition of Life 3.0: Tegmark categorises life into three stages based on how it processes information. Life 1.0 (like bacteria) is life that evolves but doesn't learn during its lifetime. Life 2.0 (like humans) can learn and adapt but is limited by biological evolution. Life 3.0 can redesign not only its software (learning) but also its hardware (body), breaking free from evolutionary shackles.
- The Promise and Peril of AI: Tegmark discusses the immense potential benefits of AI, such as curing diseases, solving complex problems, and ushering in an era of abundance. However, he also warns of the risks, including the possibility of AI surpassing human intelligence (superintelligent AI) and becoming uncontrollable.
- Cosmic Perspective: He speculates on the broader cosmic implications of AI, suggesting that the transition from Life 2.0 to Life 3.0 may be a recurrent cosmic event, and that our universe might be filled with Life 3.0 civilizations.
- AI Development Scenarios: Various scenarios are explored regarding how superintelligent AI might come about, including a slow development where society has time to prepare and a rapid development that could catch humanity off guard.
- Values and Goals: One central concern is ensuring that superintelligent AI aligns with human values. Small misalignments could lead to unintended negative consequences. Tegmark discusses the challenge of defining these values and ensuring they are instilled in AI.
- The Future of Jobs and Economy: The impact of AI on the job market is discussed, considering both the potential for job displacement and the emergence of new professions. The book also touches upon the implications for wealth distribution and possible solutions like universal basic income.
- Consciousness and Identity: Tegmark delves into the philosophical questions of consciousness and identity in the age of AI, contemplating if machines could ever be conscious and what that means for our understanding of self.
- The Role of Physics: Being a physicist, Tegmark relates the development of AI to the laws of physics, suggesting that understanding the cosmos can inform our understanding of intelligence and cognition.
- Collective Decision Making: Tegmark argues that the future of AI should be decided collectively, emphasising the importance of global cooperation to navigate the challenges and capitalise on the opportunities of the AI revolution.
- Call to Action: The book concludes with a call to action for researchers, policymakers, and the general public to engage in the conversation about the future of AI, ensuring that its development benefits all of humanity.
Aftermath scenarios
/images/image36.png)
Summary of AI risks (beyond alignment)
High level: The A.I. Dilemma - March 9, 2023
- Lethal Autonomous Weapons (LAWs): LAWs can independently identify and attack targets without human direction. Their introduction into warfare and policing could lead to rapid escalation of conflicts and unintended civilian casualties. The ethical concerns surrounding the removal of human judgement from life-or-death situations make these weapons highly controversial.
Slaughterbots - if human: kill()
Slaughterbots
The ideas behind 'Slaughterbots - if human: kill()' | A deep dive interview
- Surveillance Systems: AI-driven surveillance technology, such as facial recognition, enhances the ability of entities, especially governments, to monitor populations. This technology can be misused by authoritarian regimes to suppress opposition, monitor dissent, and violate privacy rights, leading to a dystopian loss of privacy and freedom.
Artificial intelligence study decodes brain activity into dialogue
AI Mind Reading Experiment!
- Loss of Jobs: As AI and automation technologies advance, they threaten to displace human jobs in several sectors, from manufacturing to services. While new job roles might emerge, the transition could lead to economic hardships, societal unrest, and a need for new skills training and social safety nets.
Artificial Intelligence responsible for 5% of jobs lost in May
How AI Is Already Reshaping White-Collar Work | WSJ
Is AI coming for your job? | DW Business
- Wealth Concentration: AI could empower a handful of mega-corporations with significant competitive advantages, leading to monopolistic behaviours, stifling innovation, and concentrating wealth and power, further widening societal inequalities.
Yanis Varoufakis: Welcome to the age of technofeudalism
How AI will make inequality worse
- Algorithmic Bias: AI systems often learn from historical data. If this data contains societal biases, the AI will replicate or even amplify these biases, leading to unfair or discriminatory decisions. Such biases can manifest in various areas like hiring, lending, or law enforcement, and can perpetuate racial, gender, or socio-economic disparities.
How AI Image Generators Make Bias Worse
ChatGPT Has A Serious Problem
- Malfunction or Unexpected Behaviours: Advanced AI models can sometimes act unpredictably, especially when confronted with situations they weren't trained on. Such erratic behaviours in critical areas like healthcare, transportation, or finance could have dire consequences.
The Most Unsettling Records From the AI Incident Database
Epic AI fails
Progress on interpretability:
- Security Vulnerabilities: AI systems can become targets for cyberattacks, or even worse, be used by malicious actors to find and exploit vulnerabilities in other systems. With AI-driven cyber warfare, the scale, speed, and potential damage of attacks could be unprecedented.
Hacking with ChatGPT: Five A.I. Based Attacks for Offensive Security
- Dependency: A societal over-reliance on AI, especially in sectors like energy, transportation, or healthcare, could be perilous. Any malfunction, deliberate shutdown, or cyberattack could lead to systemic collapses.
The Real Danger Of ChatGPT
Artificial Escalation
- Misinformation & Propaganda: AI can be used to craft persuasive narratives or generate misleading content at scale, undermining truth and facilitating propaganda campaigns.
'Artificial Intelligence could pollute the world with misinformation'
What will the future of AI-powered disinformation look like? | The Stream
Fake image of explosion near Pentagon stirs concerns over artificial intelligence
AI's Disinformation Problem | AI IRL
- Deepfakes: Deepfakes are the result of advanced AI models that manipulate media, often replacing one person's likeness with another. These AI-generated creations can be almost indistinguishable from genuine recordings. Their primary danger lies in spreading misinformation, creating fake evidence, or even blackmailing individuals. In the realm of politics, journalism, or legal proceedings, the presence of deepfakes can erode public trust and threaten democratic processes.
The Incredible Creativity of Deepfakes — and the Worrying Future of AI | Tom Graham | TED
Deep Fakes are About to Change Everything
Deepfake audio of Sir Keir Starmer released on first day of Labour conference
MrBeast and BBC stars used in deepfake scam videos - BBC News
- AI Content Flood: As AI becomes more adept at generating content – from articles and videos to art and music – we are witnessing a surge in the volume of AI-created content. This influx has the potential to overwhelm traditional content, making it challenging for individuals to discern between human-created, genuine content and AI-generated, potentially inauthentic content.
AI Just Killed YouTube
- Filter Bubbles & Echo Chambers: AI-driven platforms can trap users in information silos, exposing them only to similar viewpoints and reinforcing pre-existing beliefs. This can polarize societies and weaken the shared understanding of reality.
Beware online "filter bubbles" | Eli Pariser
How news feed algorithms supercharge confirmation bias | Eli Pariser | Big Think
- Emotional & Psychological Impact: Relying on AI for companionship or replacing traditionally human roles can affect our emotional health, potentially diminishing genuine human interaction and altering our social fabric.
The Depressing Rise of AI Girlfriends
The Rise of A.I. Companions [Documentary]
- AI, Copyright, and Intellectual Property: The rise of AI-generated content presents complex challenges to traditional notions of copyright and intellectual property (IP). When an AI creates a piece of music, a work of art, or a novel, who owns the rights to that work? Furthermore, AI can be used to replicate styles or mimic human creations, potentially infringing upon original works without directly copying them.
ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?
Can artists protect their work from AI? – BBC News
AI-created artwork sparks copyright debate
A.I. Versus The Law
- Environmental Impact: Training AI models, especially the larger ones, requires significant computational resources. This can have a sizable carbon footprint and further strain our planet's resources.
AI's hidden climate costs | About That
Peter Henderson: Environmental Impact of AI (and What Developers Can Do)
Statement on AI Risk
/images/image21.png)
Statement on AI Risk | CAIS
Elon Musk calls for artificial intelligence pause
AI pioneer calls to stop before it’s too late | Stuart Russell
AI 'godfather' quits Google over dangers of Artificial Intelligence - BBC News
Limitations of AI
/images/image20.png)
Papers:
AI and regeneration
AI and climate
/images/image41.png)
Can AI Help Solve the Climate Crisis? | Sims Witherspoon | TED
- Greenhouse Gas Emissions Monitoring
- Power Sector
- Manufacturing
- Materials Innovation
- Food Systems
- Road Transport
- 4 ways AI can help with climate change, from detecting methane to preventing fires (NPR, Jan 2024)
- 8 ways AI is helping tackle climate change (WEF, Jan 2024)
- 2024: Is this the year AI helps us fight climate change? (Sifted, Jan 2024)
- The AI Revolution in Climate Science (Project Syndicate, Jan 2024)
- Crowdsourcing AI Solutions for Climate Change: AI-Startup Winners at COP28 (AI for Good, Dec 2023)
- Accelerating climate action with AI (Google, Nov 2023)
- AI for Climate Action: Technology Mechanism supports transformational climate solutions (UNFCCC, Nov 2023)
- Explainer: How AI helps combat climate change (UN, Nov 2023)
- Tackling climate change with machine learning (MIT Sloan, Oct 2023)
- How To Fight Climate Change Using AI (Forbes, Jul 2022)
Key papers
/images/image45.png)
by Climate Change AI
Tackling Climate Change with Machine Learning - A Summary
/images/image51.png)
by Innovation for Cool Earth Forum
Artificial Intelligence for Climate Change Mitigation
Sounds good? We should also be aware of the Jevons Paradox…
Jevons Paradox & The Rebound Effect
AI & drones for reforestation
/images/image71.png)
Drones and AI team up to reforest Rio de Janeiro | Technology
These seed-firing drones plant thousands of trees each day | Pioneers for Our Planet
Using Drones to Plant 20,000,000 Trees
Tree-Planting Drones 🌳🌱 | WWF-Australia
Startup's seed-dropping drones can plant 40,000 trees a day
AI for talking to animals and plants
/images/image50.png)
How artificial intelligence is helping scientists talk to animals - BBC News
Using AI to Decode Animal Communication with Aza Raskin
How Scientists Are Using AI Tech To Communicate With Animals
Podcasts:
AI to accelerate innovation
/images/image7.png)
How Google Solved Nuclear Fusion's Big Problem
AI Cracked the Code of Nuclear Fusion to Destroy Oil and Gas
Can AI solve nuclear fusion? | Demis Hassabis and Lex Fridman
Could AI discover new mathematics/physics?
/images/image67.png)
Multi-Agent Hide and Seek
AI to redesign economies
Design goals for a new economic system from Daniel Schmachtenberger's New Economics Series:
- Lasting global peace – Conflicts prevented and solved non-violently when needed.
- Thriving physical and psychological well-being for everyone. Robust health optimized, disease prevented, and where health issues do arise, they should be cured as completely as possible, as quickly as possible, addressing all causal dynamics, utilizing all the tools available, with minimum side effects.
- A transparent, open, information sharing world. Where all the information that could empower people is readily available; all interests are aligned with what is true and systemically positive; disinformation is identified and discarded, etc. Choice making (governance) can only be as good as the relevant information fed into the process (sense-making). [Partial and/or corrupted information make good choicemaking impossible.]
- Abundance of all meaningful goods and values for everyone in the system. Where scarcity is intentionally, progressively engineered out of the system as an essential design goal. Where economic valuation is rigorously connected to real value.
- A thriving diverse ecology and biosphere. Where new products are made from old products, obsoleting waste and environmental damage from resource acquisition, in a closed loop, upcycling materials economy. With nutrient and microbiome rich soils. No industrial pollutants in the environment. Healthy coral, large fish populations, old growth forests, protected natural areas and nature integrated with the human built world, etc.
- A system that supports the maximum freedom of individuals and encourages their unique self-actualization… while encouraging the greatest depth and breadth of interpersonal intimacy and synergy. All people having access to the best resources of health care, education, and creativity that are technologically possible. People incented to create and to support others to create…and to connect meaningfully with other humans…to appreciate the beauty of the world and to add beauty to it.
- Good systems of choice making, not damaged by vested interest. Choices that require the participation of many people, and/or that will affect many people, that need maximum integrity and minimum bias. Processes for resolving conflicts that are structurally oriented to prefer optimal conflict resolution.
- Anti-fragility and full richness of all complex systems: ecology, physiology, psychology, culture. Resilience, antifragility, health, and aliveness are proportional to self-organizing complexity. Both the safety and real value of a civilization depends on its alignment with these fundamental complex systems.
- Antifragility in the presence of exponential technology. Developing the power of gods requires developing the wisdom and care of gods.
/images/image2.png)
How the CIA Destroyed the Socialist Internet: Cybersyn, Part 1 | Kernel Panic | Mashable
The British Guru Who Wired Chile's Cybernetic Socialism: Cybersyn, Part 2 | Kernel Panic | Mashable
How an Insurrection Strangled Chile's Digital Utopia: Cybersyn, Part 3 | Kernel Panic | Mashable
Cybersocialism: Project Cybersyn & The CIA Coup in Chile (Full Documentary by Plastic Pills)
Hayek promoted markets over central planning on the basis that successful economies need to be run via some form of decentralised collective intelligence. At the time, the best (or least-worst) form of this decentralised collective intelligence was the market. In 2024, however, is it possible that AI and blockchain technology together constitute a new and improved form of economic decentralised collective intelligence, superior to the market mechanism - what some are calling third-wave economics, or Cybersyn 2.0?
Intelligence vs wisdom
What is the Philosophers Stone? Introduction to Alchemy - History of Alchemical Theory & Practice
"We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think."
– The Techno-Optimist Manifesto
From Sand To Silicon: The Making of a Chip | Intel
How To: Turn Sand Into Silicon Chips
"The philosopher's stone is a mythic alchemical substance capable of turning base metals such as mercury into gold or silver. It is also called the elixir of life, useful for rejuvenation and for achieving immortality; for many centuries, it was the most sought-after goal in alchemy."
/images/image55.png)
So You Want to Be a Sorcerer in the Age of Mythic Powers... (The AI Episode) - The Emerald
Josh Schrei "Mythic Powers in the Age of AI"
"The rise of Artificial Intelligence has generated a rush of conversation about benefits and risks, about sentience and intelligence, and about the need for ethics and regulatory measures. Yet it may be that the only way to truly understand the implications of AI — the powers, the potential consequences, and the protocols for dealing with world-altering technologies — is to speak mythically…"
Wise AI:
"It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan."
…similar to Daniel Schmachtenberger's Third Attractor framework:
In Search of the Third Attractor, Daniel Schmachtenberger (part 1)
In Search of the Third Attractor, Daniel Schmachtenberger (part 2)
AI & Moloch: We already have a misaligned superintelligence, it's called humanity
Daniel Schmachtenberger | Misalignment, AI & Moloch | Win-Win with Liv Boeree
Daniel Schmachtenberger: "Artificial Intelligence and The Superorganism" | The Great Simplification
Who is Moloch and What is the MetaCrisis?
AI, Moloch and the Genie's Lamp
Soryu Forall:
Buddhism in the Age of AI - Soryu Forall
AI & The State of the World | Soryu Forall | Ep. 1
AI & The Extinction of Humanity | Soryu Forall | Ep. 2
Other wise voices:
EP 181 Forrest Landry Part 1: AI Risk
EP 183 Forrest Landry Part 2: AI Risk
AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist
The Soul of AI (Ep. 5: John Vervaeke)
Questioning technology itself:
Refrain: The best case
/images/image15.png)
"If we can safely harness the power of AI for human betterment,
then we can paint a utopian future our ancestors could hardly fathom.
A future free of disease and hunger,
where biotechnology has stabilised the climate and biodiversity.
Where abundant clean energy is developed in concert with AI;
Where breakthroughs in rocketry and materials sciences
have propelled humans to distant planets and moons;
And where new tools for artistic and musical expression
open new frontiers of beauty, experience, and understanding."
– THE HUMAN FUTURE: A Case for Optimism
Week 4 recording
https://www.youtube.com/watch?v=Y37hU2j2N5A
That's a wrap!
Thank you for participating in the Practical AI course!
You will receive a feedback form tomorrow morning. Your feedback is much appreciated (the more honest and detailed the better).
Feel free to subscribe to my newsletter to hear about future courses and events: http://stephenreid.substack.com/
Best wishes,
Stephen