Introduction to AI
(Jan 2024)
by Stephen Reid
AI (Artificial Intelligence), ML (Machine Learning) and DL (Deep Learning)
Recommended newsletters and podcasts
Classic 1-feature binary classification
Why are we so interested in classification?
Classic 2-feature binary classification
2-feature classification using a single perceptron
Geometrical interpretation of the update rule
Non-linear activation functions
Backpropagation/Gradient descent
Universal Approximation Theorem
An example of an artificial neural network
Session 2: Large Language Models (LLMs)
What is a Large Language Model (LLM)?
Tokenization, sequencing and padding
A simple neural network for next word prediction
'Hallucinations'/Fabrication/BS
Beyond context windows: Retrieval Augmented Generation
Microsoft Copilot & Google Duet AI
DALLE-3: no more prompt engineering?
Summary of AI risks (beyond alignment)
AI for talking to animals and plants
So You Want to Be a Sorcerer in the Age of Mythic Powers?
Thank you for registering for Introduction to AI! I'm honoured you've chosen me as your guide to lead you towards this fascinating frontier 🦾
A reminder: You will be sent pre-reading/watching a few days before each session (allow 1 hour), and homework at the end of each session (allow 1 hour). To get the most out of the course you should also set aside at least 2 hours per session to go back through the notes, read some of the links provided and make sure you understand everything we've covered. So that's a minimum of 6 hours per session for maximum benefit (2 hours live, 1 hour pre-reading/watching, 2 hours revision, 1 hour homework). If you are an absolute beginner it is particularly important to do all the above.
Recordings will be provided shortly after each live session, and every participant will be offered a short private consultation during the course (more on that in a future session).
Please try to watch the following before our first session:
Optional:
The course notes are accessible at https://tinyurl.com/intro-to-ai-jan-2024 (same link for all sessions). For course discussion, you are welcome to join the Telegram group via https://t.me/+oG8d0UVr-IdkZjBk. You can also leave comments in this doc.
I look forward to seeing you at 4pm UTC on Thurs 11th Jan at https://us06web.zoom.us/j/87879903601 (same Zoom link for all sessions).
Best,
Stephen
Please note, you don't need to study this section beforehand (though you are welcome to) – I will be going over it in the live session. It is also subject to change up until the session.
You can use archive.is to read Medium articles if you don't have a Medium account.
– Seema Singh, Towards Data Science
– Audrey Lorberfeld, Towards Data Science
-> Deep Unsupervised Learning, Deep Supervised Learning, Deep Reinforcement Learning
What is Deep Learning? - MachineLearningMastery.com
AI already beats the average human at a number of key tasks…
…and is expected to surpass humans on a bunch more over the coming years
"With so many new AI newsletters popping up, it can be overwhelming to figure out which you'd actually enjoy reading. To choose the right AI newsletter, consider your level of expertise in the field. If you're a beginner, AI Breakfast. If you know a bit more about tech, you might want a newsletter that talks about daily software and tools you could implement today, if so – Ben's Bites. And lastly, if you want a newsletter that dives deep into technical discussions and emerging research you should go with The Batch. You really won't go wrong with any of them, they're all extremely interesting reads about technology that is actively changing the world."
"If we can safely harness the power of AI for human betterment,
then we can paint a utopian future our ancestors could hardly fathom.
A future free of disease and hunger,
where biotechnology has stabilised the climate and biodiversity.
Where abundant clean energy is developed in concert with AI;
Where breakthroughs in rocketry and materials sciences
have propelled humans to distant planets and moons;
And where new tools for artistic and musical expression
open new frontiers of beauty, experience, and understanding."
– THE HUMAN FUTURE: A Case for Optimism
Reducing toil:
Catalysing creativity:
Promoting health:
Improving education:
– A brief history of AI - Raconteur
1950s and 1960s: Birth and Initial Excitement
Photo of Frank Rosenblatt from Professor’s perceptron paved the way for AI – 60 years too soon
Perceptron Research from the 50's & 60's, clip
1970s: First Winter
“What Rosenblatt wanted was to show the machine objects and have it recognize those objects. And 60 years later, that’s what we’re finally able to do,” Joachims said. “So he was heading on the right track, he just needed to do it a million times over. At the time, he didn’t know how to train networks with multiple layers. But in hindsight, his algorithm is still fundamental to how we’re training deep networks today… He lifted the veil and enabled us to examine all these possibilities, and see that ideas like this were within human grasp.”
1980s: Revival with Backpropagation
1990s: Second Winter
– What Was Actually Wrong With Backpropagation in 1986?, slide by Geoffrey Hinton
2010s: Deep Learning Revolution
Late 2010s to Present: Generative AI
The notion of the "triple exponential" in AI progress refers to the rapid growth in three key areas: data, hardware, and software. Each of these areas is experiencing exponential growth, and the combination of all three has catalysed the rapid advancement of AI in recent years. Let's break down each component:
Kaggle notebook for these 3 sections
The Main Ideas of Fitting a Line to Data (The Main Ideas of Least Squares and Linear Regression.)
Note: don't do this 🙃 Use logistic regression instead:
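As a minimal sketch of the alternative (toy 1-feature data and the learning rate are my own, not from the notebook), logistic regression fits a smooth probability curve by gradient descent on the log loss:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy 1-feature dataset: class 0 clusters below x = 2, class 1 above
X = np.array([0.5, 1.0, 1.5, 2.5, 3.0, 3.5])
y = np.array([0, 0, 0, 1, 1, 1])

w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):            # gradient descent on the log loss
    p = sigmoid(w * X + b)       # predicted probabilities
    w -= lr * np.mean((p - y) * X)  # dL/dw
    b -= lr * np.mean(p - y)        # dL/db

preds = (sigmoid(w * X + b) > 0.5).astype(int)
print(preds)  # [0 0 0 1 1 1] once the boundary near x = 2 is learned
```

Unlike fitting a straight line to 0/1 labels, the outputs stay between 0 and 1 and can be read as probabilities.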
Because it turns out both text and image generation can be understood as classification problems 🤯 🥳
For text generation, given a vocabulary of size V, predicting the next word or character in a sequence can be viewed as a classification problem with V classes. This is, in fact, how many language models work. Given a sequence of words or characters, the model predicts the probability distribution over the vocabulary for the next word or character.
(Simpler version: When a computer tries to write text, it's like playing a guessing game. Imagine you have a set of words or letters. After reading some text, the computer tries to guess the next word or letter from this set. This guessing is similar to picking the right answer from multiple choices. Many computer programs that write text use this approach. They look at the words already written and then predict the next one.)
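As a toy illustration of that final step (the vocabulary and scores here are invented): the model produces one raw score per word in the vocabulary, and a softmax turns those V scores into a probability distribution from which the next word is chosen.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]       # V = 5, hypothetical vocabulary
logits = np.array([0.2, 1.5, 0.3, 0.1, 2.0])     # raw scores from the network

# Softmax: exponentiate and normalise, giving a probability per vocabulary word
probs = np.exp(logits) / np.exp(logits).sum()
next_word = vocab[int(np.argmax(probs))]
print(next_word)  # "mat", the word with the highest score
```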
For image generation, if you discretize the pixel values, you can treat the problem as predicting the value of each pixel given the values of previous pixels. For example, if you're generating a grayscale image and you discretize each pixel value into 256 values (0-255), generating an image becomes a series of classification problems where the task is to predict the value of the next pixel from 256 possible classes.
(Simpler version: Creating an image on a computer is like colouring a picture dot by dot. If the picture is black and white, each dot can be any shade from pure black to pure white. When a computer makes an image, it's trying to guess the right shade for each dot. It's like picking the correct colour from a palette of 256 shades. The computer looks at the shades it has already chosen and then tries to decide the best shade for the next dot.)
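In the same spirit, a single step of the pixel-by-pixel process reduces to choosing one of 256 classes. In this sketch, random scores stand in for a real model's output given the pixels generated so far:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a model's output: a probability distribution over 256 shades
# for the next pixel, conditioned on the pixels generated so far
logits = rng.normal(size=256)
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy choice: pick the most likely shade (0-255); a real generator
# would typically sample from probs instead
next_pixel = int(np.argmax(probs))
print(next_pixel)
```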
"The idea behind perceptrons (the predecessors to artificial neurons) is that it is possible to mimic certain parts of neurons, such as dendrites, cell bodies and axons using simplified mathematical models of what limited knowledge we have on their inner workings: signals can be received from dendrites, and sent down the axon once enough signals have been received. This outgoing signal can then be used as another input for other neurons, repeating the process. Some signals are more important than others and can trigger some neurons to fire more easily. Connections can become stronger or weaker, new connections can appear while others can cease to exist. We can mimic most of this process by coming up with a function that receives a list of weighted input signals and outputs some kind of signal if the sum of these weighted inputs reaches a certain bias. Note that this simplified model mimics neither the creation nor the destruction of connections (dendrites or axons) between neurons, and ignores signal timing. However, this restricted model alone is powerful enough to work with simple classification tasks."
“We are not interested in the fact that the brain has the consistency of cold porridge.”
– Alan Turing, 1952
Kaggle notebook for this section
"We take a weighted sum of the inputs, and set the output as one only when the sum is more than an arbitrary threshold (theta, θ). However, according to the convention, instead of hand coding the thresholding parameter theta, we add it as one of the inputs, with the weight -theta as shown below, which makes it learnable."
w = np.random.rand(3)  # initialize weights randomly
for i in range(X.shape[0]):
    if y[i] == 1 and np.dot(w, X[i]) <= 0:
        w += X[i]  # activation was too weak, increase weights
    elif y[i] == 0 and np.dot(w, X[i]) > 0:
        w -= X[i]  # activation was too strong, decrease weights
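A self-contained, runnable version of that update rule (the toy AND data, seed, and epoch count are my own additions). A constant 1 is appended to each input so that the third weight plays the role of the learnable −θ described above:

```python
import numpy as np

# Inputs with a constant 1 appended, so w[2] acts as the learnable -theta
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([0, 0, 0, 1])  # the AND function, which is linearly separable

rng = np.random.default_rng(42)
w = rng.random(3)  # initialize weights randomly

for _ in range(100):  # repeated passes over the data
    for i in range(X.shape[0]):
        activation = 1 if np.dot(w, X[i]) > 0 else 0
        if y[i] == 1 and activation == 0:
            w += X[i]  # activation was too weak, increase weights
        elif y[i] == 0 and activation == 1:
            w -= X[i]  # activation was too strong, decrease weights

preds = (X @ w > 0).astype(int)
print(preds)  # [0 0 0 1] once the rule converges
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop stops making mistakes after finitely many updates.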
| | binary classification | multiclass classification |
| linearly separable | single perceptron | parallel perceptrons |
| non-linearly separable | hidden layers, almost always with non-linear activation functions (though there's a clever multilayer linear solution to the famous XOR problem) | parallel perceptrons with hidden layers (again, almost always with non-linear activation) |
The vast majority of real world problems are non-linearly separable, so we need hidden layers with non-linear activation functions.
One of the most effective ways to allow the classification of non-linearly separable data is to introduce one or more hidden layers between the input and output layers. This architecture can approximate any continuous function to an arbitrary degree of accuracy, given a sufficient number of hidden units (a result known as the universal approximation theorem). However, while adding hidden layers makes the model more expressive, it also makes it harder to train.
The basic perceptron uses a step function, which is not suitable for multi-layer training. To enable the model to capture non-linearities, we can use non-linear activation functions such as the sigmoid, hyperbolic tangent (tanh), or Rectified Linear Unit (ReLU).
With the introduction of hidden layers, the weights can't be updated using the simple perceptron update rule. Instead, we use the backpropagation algorithm, which is a way to compute the gradient of the loss function with respect to each weight by applying the chain rule. This gradient information is then used to update the weights using gradient descent or its variants.
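A minimal sketch of these ideas combined (the hidden-layer size, learning rate, and iteration count are my own choices): a tiny network with one hidden layer of sigmoid units, trained by backpropagation (the chain rule) and gradient descent on XOR, which no single perceptron can solve.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR: not linearly separable

# One hidden layer of 8 sigmoid units, one sigmoid output unit
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (chain rule), using the cross-entropy loss
    d_out = out - y                     # gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)  # gradient pushed back through the hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds)  # aims for [0 1 1 0]
```

With a step activation, the gradients in the backward pass would be zero almost everywhere, which is exactly why smooth activations are needed for backpropagation.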
No matter how wiggly or complex a decision boundary might be in a dataset, the Universal Approximation Theorem assures us that even a neural network with a single hidden layer can approximate it to any desired level of accuracy, given enough neurons in that layer.
Let's see the Universal Approximation Theorem in action!
Kaggle notebook for these 2 sections
Single layer of 1000 neurons: 4,001 trainable params
3 layers of 10 neurons: 261 trainable params, same accuracy!
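These parameter counts can be checked by hand: a dense layer has (inputs + 1) × units parameters, the +1 being the bias. Assuming 2 input features and 1 output unit (as in the notebook's 2-feature classifier):

```python
def dense_params(n_in, n_out):
    # one weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

# Single hidden layer of 1000 neurons: 2 -> 1000 -> 1
wide = dense_params(2, 1000) + dense_params(1000, 1)

# Three hidden layers of 10 neurons: 2 -> 10 -> 10 -> 10 -> 1
deep = (dense_params(2, 10) + dense_params(10, 10)
        + dense_params(10, 10) + dense_params(10, 1))

print(wide, deep)  # 4001 261
```

Depth buys the same expressive power with far fewer parameters, which is part of the answer to "why use multiple layers?".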
Why use multiple layers?
https://www.youtube.com/watch?v=cbqfHPa6X5Y
Previous course: https://www.youtube.com/watch?v=Vm-G263_wJU
Essential:
Advanced, optional:
What are Large Language Models (LLMs)?
Introduction to large language models
What are Large Language Models? | NVIDIA
Highly recommended:
Large Language Models from scratch
A Very Gentle Introduction to Large Language Models without the Hype | by Mark Riedl | Medium
In practice, an LLM is a language model based on the Transformer architecture, which itself is based on the Attention mechanism
Natural Language Processing - Tokenization (NLP Zero to Hero - Part 1)
Sequencing - Turning sentences into data (NLP Zero to Hero - Part 2)
ChatGPT has Never Seen a SINGLE Word (Despite Reading Most of The Internet). Meet LLM Tokenizers.
Kaggle notebook: a neural network for next word prediction
😲: I almost wish I hadn't gone down that rabbit-hole--and yet--and yet--it's rather curious, you know, this sort of life!
🤖: [12, 542, 173, 12, 481, 387, 42, 16, 2876, 2877, 2878, 145, 2879, 13, 488, 37, 205, 5, 2880, 7]
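A toy version of that text-to-numbers step (the sentence and word indices here are invented, not the notebook's actual vocabulary): tokenization assigns each distinct word an integer id, sequencing maps the text to those ids, and padding extends every sequence to a fixed length.

```python
text = "the cat sat on the mat"
words = text.split()

# Tokenization: each distinct word gets an integer id (0 is reserved for padding)
vocab = {w: i + 1 for i, w in enumerate(dict.fromkeys(words))}

# Sequencing: the sentence becomes a list of ids
seq = [vocab[w] for w in words]

# Padding: extend to a fixed length so every input has the same shape
max_len = 10
padded = seq + [0] * (max_len - len(seq))
print(padded)  # [1, 2, 3, 4, 1, 5, 0, 0, 0, 0]
```

Real LLM tokenizers work on sub-word pieces rather than whole words, but the principle is the same.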
Previously, we considered a neural network with 2 features and 2 classes
Here, we have ~100 features (number of words in the longest sentence) and ~5000 classes (number of distinct words), but the principle is the same – we're using a neural network as a classifier.
Conclusion from the notebook: our model kind of sucks.
Perhaps we just need more training data?
It turns out things don't get much better with more data when using simple (feedforward) neural networks (or even recurrent neural networks, RNNs, a type of neural network with some memory of previous inputs). Enter…
Sep 2014: Bahdanau introduces the attention mechanism for recurrent neural networks
June 2017: "We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence [...] entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."
Transformer = Feed-forward neural network + Attention mechanism
The two key advantages of Transformers over recurrent neural networks (RNNs):
Simpler:
Transformers, explained: Understand the model behind GPT, BERT, and T5
More complex:
Visual Guide to Transformer Neural Networks - (Episode 2) Multi-Head & Self-Attention
Let's build GPT: from scratch, in code, spelled out. (3.1m views)
Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333
The Neuroscience of “Attention”
"[Attention] allows you to look at the totality of a sentence, the Gesamtbedeutung [overall meaning] as the Germans might say, to make connections between any particular word and its relevant context… Attention allows you to travel through wormholes of syntax to identify relationships with other words that are far away — all the while ignoring other words that just don’t have much bearing on whatever word you’re trying to make a prediction about."
– A Beginner's Guide to Attention Mechanisms and Memory Networks | Pathmind
Why 'attention'?
"Take these two sentences: “Server, can I have the check?” & “Looks like I just crashed the server.” The word server here means two very different things, which we humans can easily disambiguate by looking at surrounding words. Self-attention allows a neural network to understand a word in the context of the words around it. So when a model processes the word “server” in the first sentence, it might be “attending” to the word “check,” which helps disambiguate a human server from a metal one. In the second sentence, the model might attend to the word “crashed” to determine this “server” refers to a machine."
– Transformers, Explained: Understand the Model Behind GPT-3, BERT, and T5
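The "attending" in that example can be sketched in a few lines of numpy as (single-head) scaled dot-product attention; the random Q, K, V matrices here stand in for the learned projections of real token embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8  # 5 tokens, 8-dimensional vectors

Q = rng.normal(size=(seq_len, d))  # queries: what each token is looking for
K = rng.normal(size=(seq_len, d))  # keys: what each token offers
V = rng.normal(size=(seq_len, d))  # values: the content to be mixed

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Each row of `weights` says how much one token attends to every other token
weights = softmax(Q @ K.T / np.sqrt(d))
output = weights @ V  # each token's output is a weighted mix of all values

print(weights.shape, output.shape)  # (5, 5) (5, 8)
```

In the "server"/"crashed" example, a row of `weights` is exactly where the model would put high weight on the disambiguating word.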
If you want to go deeper, check out these excellent videos from Serrano Academy:
The Attention Mechanism in Large Language Models
The math behind Attention: Keys, Queries, and Values matrices
Simpler:
More complex:
Ultimate Open-Source LLM Showdown (6 Models Tested) - Surprising Results!
Prompt Engineering 101 - Crash Course & Tips
Master the Perfect ChatGPT Prompt Formula (in just 8 minutes)!
Prompt Engineering Tutorial – Master ChatGPT and LLM Responses
The ULTIMATE Beginner's Guide to Prompt Engineering with GPT-4 | AI Core Skills
I Discovered The Perfect ChatGPT Prompt Formula
Prompt databases
Recommended:
Claude's 100K Token Context Window is INSANE!
Anthropic’s new 100K context window model is insane!
What is Retrieval-Augmented Generation (RAG)?
ChatGPT + Web Browsing Just Changed Everything!
ChatGPT Can Now Access the Internet - Top 10 prompts for ChatGPT Browse with Bing
ChatGPT Update: Web Browsing is BACK! (6 New Use Cases) 🌐
Is The New Web Browsing in ChatGPT Any Good?
The Ultimate Guide To ChatGPT Custom Instructions
ChatGPT Custom Instructions - Huge ChatGPT update
I Discovered The Ultimate ChatGPT Prompt Formula (Custom Instructions Explained)
ChatGPT Update: Custom Instructions in ChatGPT! (Full Guide)
ChatGPT Can Now Have Complete Voice Conversations - Talk to ChatGPT
How to Enable ChatGPT Voice to Voice on Phone (iPhone & Android) Talk to ChatGPT!
Master These 26 ChatGPT Plugins to Stay Ahead of 97% of People
Top 10 ChatGPT Plugins You Can't Miss
I Tried All New ChatGPT Plugins... And here is the BEST!
I Tried All 757 ChatGPT Plugins, These are the 6 You Need To Know
Formerly Advanced Data Analysis and Code Interpreter
Become a Data Analyst using ChatGPT! (Full Guide)
This ChatGPT Change is HUGE for Data Analysts
ChatGPT Code Interpreter Tutorial - New Open AI GPT Model!
ChatGPT Code Interpreter - Complete Tutorial including Prompt List
ChatGPT Code Interpreter AMAZING Example Uses!
ChatGPT Code Interpreter - The Biggest Update EVER!
Top 10 ways to use ChatGPT Code Interpreter
ChatGPT just leveled up big time...
ChatGPT Advanced Data Analysis - A New Era of Data Science Begins
Perplexity:
Google:
Bing/OpenAI:
Summarisation:
ChatGPT in Google Sheets: a beginner's guide (101)
Optional:
https://www.youtube.com/watch?v=V2A9XtB288M
Previous course: https://www.youtube.com/watch?v=tmJ-kC8vjV8
Advanced, optional:
OpenAI:
OpenAI’s ChatGPT Has Been Supercharged!
First Look At GPT-4 With Vision
Google:
Google Gemini Shocks The World and Might Be The ChatGPT-4 Killer
Gemini: Google's Latest AI Challenging GPT-4
Meta:
AnyMal: Meta's New Multimodal Genius Surpassing GPT-4
Open source:
LLAVA: The AI That Microsoft Didn't Want You to Know About!
“LLAMA2 supercharged with vision & hearing?!” | Multimodal 101 tutorial
Introducing Copilot Pro: Supercharge your Creativity and Productivity
Microsoft Copilot Pro - Everything You Need to Know
Microsoft Copilot - Excel has forever changed
Top things I've learned using Microsoft 365 Copilot | Demo
A new era for AI and Google Workspace
AI in Google Docs and Gmail IS HERE!
Duet AI The Future of Work is Already Here
Duet AI: Everything YOU NEED to Know
Duet AI for Google Workspace: Generative AI tools to transform work for the better
Stable Diffusion SDXL 1.0 Released! | Automatic1111 WebUI
OpenAI's DALL-E 3 - The King Is Back!
V6 is FINALLY HERE - Midjourney V6 FULL BREAKDOWN
Comparisons:
Midjourney V6 VS DALL•E 3: Prompt Battle & Full Review
Which is better? Midjourney v6 vs. DALL-E 3 vs. Stable Diffusion XL
Best AI Image? Midjourney V6 vs DALL E 3 vs Stable Diffusion
Coming soon:
Google’s Parti AI: Magical Results! 💫
Meta's CRAZY New AI Image Creator: "CM3leon"
ControlNet Revolutionized How We Use AI To Generate Images
How ControlNet v1.1 Is Revolutionizing AI Art Even Further
Dall-E 3 + ChatGPT Smokes Stable Diffusion & Midjourney! No More Prompt Engineering Needed
Prompt engineering:
V6 Prompt Design - Midjourney Beginner Tutorial
Advanced Midjourney V6 Guide (Pushing Boundaries of Lifelike Cinematic AI Photography)
Think Like an AI & Prompt Better in Midjourney v6
Write Prompts like THIS for Success in Midjourney V6
Face swapping:
Using GPT to enhance Midjourney prompts:
Turn ChatGPT into a Powerful Midjourney Prompt Machine
Text to Image in 5 minutes: Parti, Dall-E 2, Imagen
Text to Image: Part 2 -- how image diffusion works in 5 minutes
How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile
Text-to-image generation explained
I Tested 7 AI Video Generators.. Here's The BEST!
I Tried 5 Text-to-Video AI Generators (Here's the best one)
I Tried 5 AI Video Generators for Faceless Channels (Here’s the BEST!)
This Is THE BEST AI Video Generator To Create Faceless YouTube Videos (2024 Update)
I Tried 5 of The Best AI Text-To-Video Generators & Editing Tools of 2024... (Are They Any Good?)
3 Incredible Text-to-Video AI Generators You Have To Try!
New AI Video Generator Does Prompt to YouTube Video
This AI Tool Creates Videos in Seconds! (No Editing)
Text-To-Film: 15-Minute Video From One Prompt!
HeyGen AI Translation Can Translate Video into ANY Language!
Star Wars by Wes Anderson Trailer | The Galactic Menagerie
Lord of the Rings by Wes Anderson Trailer | The Whimsical Fellowship
The forgotten punk pioneer (An A.I. mockumentary)
r/aivideo - The place for AI generated videos on reddit
The World's First AI Filmmaking Course — Curious Refuge
Going Viral: Behind the AI-Generated Wes Anderson Trailers for Star Wars and LOTR
All-In Summit: AI film and the generative art revolution with Caleb Ward
Animating images:
Animate MidJourney Images - Full AI Animation Workflow.
How To Animate A MidJourney Image (For Free)
New A.I Mode: Create Animations From A Single Image!
Best AI Animation Tutorial - FREE Options | Step-by-Step (Ghibli Studio Inspired)
10 Free AI Animation Tools: Bring Images to Life
Top 7 Image To Video AI Tools: Create AI Animation For FREE
Create Cinematic AI Videos with Runway Gen-2
Create Cinematic AI Videos with Pika Labs
Mind-Blowing New AI Video Generator: Text to Video AND Image to Video with Pika Labs
This Free AI Video Generator Hits Different
The Most Realistic AI Video Tool Yet!
This Free AI Video Generator is Wild!
Zeroscope Text2Video is now BETTER than RunwayML Gen2 (FREE)
How To Make Cool AI Videos (Step-By-Step)
Deforum AI Full Tutorial: Text To Video Animation
Create Amazing Videos With AI (Deforum Deep-Dive)
A.I. Sampling and how the Music Industry will change forever
Make a HIT Song and Music Video with AI (for Free)
How AI might make a lot of musicians irrelevant
The AI Effect: A New Era in Music and Its Unintended Consequences
Music revolution: how AI could change the industry forever
Suno:
Suno AI: Generative Music Is HERE
How to Make a FULL Song with Suno AI
It's Over - The Machines are Here - Suno.ai
MusicLM:
Live from Latent Space (Album made with Google MusicLM)
Google's MusicLM: Text Generated Music & It's Absurdly Good
Production tools:
The Best A.I. Production Tools For Music Makers! (2023)
The Best A.I. Production Tools For Music Makers PT.2! (2023)
The 4 Best AI Music Production Tools Right Now
The Best A.I. Tools for Music Producers | Artificial Intelligence
These A.I Tools Will Change How Music Is Made FOREVER
AI-Generated Music Vocals Are Crazy (New Tech)
https://www.youtube.com/watch?v=X8OGwy85YVE
Previous course: https://www.youtube.com/watch?v=ke0hdyC-8E0
Optional:
Recent expert voices:
"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview
“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company
What was 60 Minutes thinking, in that interview with Geoff Hinton?
EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! - Mo Gawdat | E252
Open AI Founder on Artificial Intelligence's Future | Exponentially
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
Can we build AI without losing control over it? | Sam Harris
The danger of AI is weirder than you think | Janelle Shane
Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED
The Urgent Risks of Runaway AI – and What to Do about Them | Gary Marcus | TED
What happens when our computers get smarter than we are? | Nick Bostrom
AGI in sight | Connor Leahy, CEO of Conjecture | AI & DeepTech Summit | CogX Festival 2023
The A.I. Dilemma - March 9, 2023
Douglas Rushkoff: I Will Not Be Autotuned - Crashing Technosolutionism
The Impact of chatGPT talks (2023) - Prof. Max Tegmark (MIT)
Recent commentary:
Artificial Intelligence: Last Week Tonight with John Oliver (HBO)
AI: Does artificial intelligence threaten our human identity? | DW Documentary
Max Tegmark lecture on Life 3.0 – Being Human in the age of Artificial Intelligence
Life 3.0: Being Human in the Age of AI | Max Tegmark | Talks at Google
How to get empowered, not overpowered, by AI | Max Tegmark
Max Tegmark on Life 3.0: Being Human in the Age of Artificial Intelligence
On the Lex Fridman podcast:
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
Max Tegmark: AI and Physics | Lex Fridman Podcast #155
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds (Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, Jaan Tallinn)
The intelligence explosion: Nick Bostrom on the future of AI
Slaughterbots - if human: kill()
The ideas behind 'Slaughterbots - if human: kill()' | A deep dive interview
Artificial intelligence study decodes brain activity into dialogue
AI Mind Reading Experiment!
Artificial Intelligence responsible for 5% of jobs lost in May
How AI Is Already Reshaping White-Collar Work | WSJ
Is AI coming for your job? | DW Business
Yanis Varoufakis: Welcome to the age of technofeudalism
How AI will make inequality worse
How AI Image Generators Make Bias Worse
ChatGPT Has A Serious Problem
The Most Unsettling Records From the AI Incident Database
Epic AI fails
Progress on interpretability:
Hacking with ChatGPT: Five A.I. Based Attacks for Offensive Security
The Incredible Creativity of Deepfakes — and the Worrying Future of AI | Tom Graham | TED
Deep Fakes are About to Change Everything
Deepfake audio of Sir Keir Starmer released on first day of Labour conference
MrBeast and BBC stars used in deepfake scam videos - BBC News
Beware online "filter bubbles" | Eli Pariser
How news feed algorithms supercharge confirmation bias | Eli Pariser | Big Think
The Depressing Rise of AI Girlfriends
The Rise of A.I. Companions [Documentary]
ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?
Can artists protect their work from AI? – BBC News
AI-created artwork sparks copyright debate
AI's hidden climate costs | About That
Peter Henderson: Environmental Impact of AI (and What Developers Can Do)
Elon Musk calls for artificial intelligence pause
AI pioneer calls to stop before it’s too late | Stuart Russell
AI 'godfather' quits Google over dangers of Artificial Intelligence - BBC News
Papers:
Can AI Help Solve the Climate Crisis? | Sims Witherspoon | TED
Key papers
Tackling Climate Change with Machine Learning - A Summary
by Innovation for Cool Earth Forum
Artificial Intelligence for Climate Change Mitigation
Drones and AI team up to reforest Rio de Janeiro | Technology
These seed-firing drones plant thousands of trees each day | Pioneers for Our Planet
Using Drones to Plant 20,000,000 Trees
Tree-Planting Drones 🌳🌱 | WWF-Australia
Startup's seed-dropping drones can plant 40,000 trees a day
How artificial intelligence is helping scientists talk to animals - BBC News
Using AI to Decode Animal Communication with Aza Raskin
How Scientists Are Using AI Tech To Communicate With Animals
Podcasts:
How Google Solved Nuclear Fusion's Big Problem
AI Cracked the Code of Nuclear Fusion to Destroy Oil and Gas
Can AI solve nuclear fusion? | Demis Hassabis and Lex Fridman
Could AI discover new mathematics/physics?
Design goals for a new economic system from Daniel Schmachtenberger's New Economics Series:
How the CIA Destroyed the Socialist Internet: Cybersyn, Part 1 | Kernel Panic | Mashable
The British Guru Who Wired Chile’s Cybernetic Socialism: Cybersyn, Part 2 | Kernel Panic | Mashable
How an Insurrection Strangled Chile’s Digital Utopia: Cybersyn, Part 3 | Kernel Panic | Mashable
Cybersocialism: Project Cybersyn & The CIA Coup in Chile (Full Documentary by Plastic Pills)
Hayek promoted markets over central planning on the basis that successful economies need to be run via some form of decentralised collective intelligence. At the time, the best (or least-worst) form of this decentralised collective intelligence was the market. In 2024, however, is it possible that AI and blockchain technology together constitute a new and improved form of economic decentralised collective intelligence, superior to the market mechanism, one that some are calling third-wave economics, or Cybersyn 2.0?
What is the Philosophers Stone? Introduction to Alchemy - History of Alchemical Theory & Practice
"We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think."
– The Techno-Optimist Manifesto
From Sand To Silicon: The Making of a Chip | Intel
How To: Turn Sand Into Silicon Chips
"The philosopher's stone is a mythic alchemical substance capable of turning base metals such as mercury into gold or silver. It is also called the elixir of life, useful for rejuvenation and for achieving immortality; for many centuries, it was the most sought-after goal in alchemy."
So You Want to Be a Sorcerer in the Age of Mythic Powers... (The AI Episode) - The Emerald
Josh Schrei "Mythic Powers in the Age of AI"
"The rise of Artificial Intelligence has generated a rush of conversation about benefits and risks, about sentience and intelligence, and about the need for ethics and regulatory measures. Yet it may be that the only way to truly understand the implications of AI — the powers, the potential consequences, and the protocols for dealing with world-altering technologies — is to speak mythically…"
Wise AI:
"It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan."
…similar to Daniel Schmachtenberger's Third Attractor framework:
In Search of the Third Attractor, Daniel Schmachtenberger (part 1)
In Search of the Third Attractor, Daniel Schmachtenberger (part 2)
AI & Moloch: We already have a misaligned superintelligence, it's called humanity
Daniel Schmachtenberger | Misalignment, AI & Moloch | Win-Win with Liv Boeree
Daniel Schmachtenberger: "Artificial Intelligence and The Superorganism" | The Great Simplification
Who is Moloch and What is the MetaCrisis?
AI, Moloch and the Genie's Lamp
Other wise voices:
EP 181 Forrest Landry Part 1: AI Risk
EP 183 Forrest Landry Part 2: AI Risk
AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist
The Soul of AI (Ep. 5: John Vervaeke)
Buddhism in the Age of AI - Soryu Forall
Questioning technology itself:
"If we can safely harness the power of AI for human betterment,
then we can paint a utopian future our ancestors could hardly fathom.
A future free of disease and hunger,
where biotechnology has stabilised the climate and biodiversity.
Where abundant clean energy is developed in concert with AI;
Where breakthroughs in rocketry and materials sciences
have propelled humans to distant planets and moons;
And where new tools for artistic and musical expression
open new frontiers of beauty, experience, and understanding."
– THE HUMAN FUTURE: A Case for Optimism
Thank you for participating in the Introduction to AI course!
You will receive a feedback form tomorrow morning. Your feedback is much appreciated (the more honest and detailed the better).
Feel free to subscribe to my newsletter to hear about future courses and events: http://stephenreid.substack.com/
Best wishes,
Stephen
https://www.youtube.com/watch?v=XjgdgNZ75p8
Previous course: https://www.youtube.com/watch?v=9xiBHq9B0SI