Inducing a Stroke in ChatGPT… Could AI help Unlock the Mysteries of the Human Brain?
Pictured above: Midjourney Prompt: android with an exposed fractal brain having a stroke. glowing. scientific. side view –v 4
One of the fascinating things about language models like ChatGPT is how they encode information.
You can delve into this in this prior article where we explore the internals of ChatGPT.
Two important takeaways from that post:
- ChatGPT’s implementation is loosely modeled on the human brain. With a simulated artificial neural network of 175 billion connection weights as a major component of its architecture, ChatGPT is designed to behave similarly to the ‘wet’ neural network inside your skull (yes, there are still many differences)
- ChatGPT doesn’t store words the way a ‘normal’ computer program does. There are no letters. Its ‘vocabulary’ (for lack of a better term) is stored simply as relationships – words are related to other words with different affinities, essentially like the connections between neurons. Words are concepts, connected to other concepts. That’s it.
#1 and #2 result in a highly-optimized representation of human language. It wouldn’t surprise me if our evolution as a species has resulted in a similar architecture.
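The ‘relationships only’ idea in the second takeaway can be sketched with toy word vectors. The numbers below are invented for illustration (real models learn embeddings with thousands of dimensions), but the point is the same: relatedness between concepts is just geometry between vectors.

```python
import numpy as np

# Made-up 4-dimensional 'embeddings'. Only the relationships between
# the vectors matter, not the individual numbers.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """How 'related' two word vectors are (1.0 = pointing the same way)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # closely related
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # barely related
```

In a real language model these affinities are learned from data rather than hand-written, but the storage principle is the same: no letters, just positions in a space of concepts.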
My grandmother, of blessed memory, had a stroke in her early twenties (I understand it was due to a badly typed blood transfusion). It left her paralyzed on her right side, and took her years to relearn how to speak and write – skills that she never recovered completely.
A Thought Exercise
Here’s a thought exercise… one that I’d like to get around to in the coming weeks or months (unless a researcher wants to take the baton from me – I’ll gladly hand it off.)
What would happen if we took ChatGPT’s fully trained neural network of 175 billion connection weights, and just zeroed out a whole bunch of those values? 1% of them? 10%? 50%?
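Mechanically, the experiment is simple. Here’s a minimal sketch of the idea, using a NumPy array as a stand-in for one layer of a trained network (`ablate_weights` is a hypothetical helper; in a real run you’d apply it to every parameter tensor of the model):

```python
import numpy as np

def ablate_weights(weights, fraction, rng=None):
    """Zero out a random fraction of connection weights,
    simulating diffuse, indiscriminate damage."""
    if rng is None:
        rng = np.random.default_rng(0)
    damaged = weights.copy()
    mask = rng.random(damaged.shape) < fraction  # pick ~fraction of entries
    damaged[mask] = 0.0
    return damaged

layer = np.random.default_rng(1).normal(size=(1000, 1000))  # toy 'layer'
for fraction in (0.01, 0.10, 0.50):
    damaged = ablate_weights(layer, fraction)
    survived = np.count_nonzero(damaged) / damaged.size
    print(f"{fraction:.0%} ablated -> {survived:.1%} of weights survive")
```

The interesting part isn’t the code, of course – it’s what the damaged model says afterward.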
I’m curious if ChatGPT would behave like a human who just had a stroke – struggling to find certain words, producing gibberish in certain scenarios that it thinks is correct, etc.
What other symptoms might it exhibit?
A More Nuanced Approach
Now, just zeroing out a random bunch of connections is a naïve approach. In ChatGPT, every neuron in one layer of the ANN is connected to every neuron in the next layer. The human brain doesn’t work that way: a typical neuron connects to only a few thousand others, a vanishingly small fraction of the total. The human brain also has a lot more ‘physical locality’ to consider – neurons usually don’t connect directly to neurons that are physically far away. ChatGPT achieves something loosely similar with its layers, but I suspect that locality isn’t nearly enough.
You’d want to run a simple traversal algorithm that starts at some point in the network and follows neural connections around (using parameter weights as a proxy for connectivity and proximity), zeroing out connections as it goes to simulate the ‘stroke’. Once in a while you’d want it to take a random ‘jump’ to a ‘nearby’ neuron that isn’t directly connected.
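The traversal could be sketched like this. It’s a hypothetical toy, not a validated stroke model: edge strength `|weights[i, j]|` stands in for connectivity/proximity, the walk destroys each connection it crosses, and the occasional random jump mimics damage spreading to nearby-but-unconnected tissue.

```python
import numpy as np

def simulate_stroke(weights, start, n_steps, jump_prob=0.1, rng=None):
    """Walk the connection graph from `start`, zeroing connections as we go.
    Absolute weight is used as a proxy for connection strength/proximity."""
    if rng is None:
        rng = np.random.default_rng(0)
    damaged = weights.copy()
    current = start
    for _ in range(n_steps):
        if rng.random() < jump_prob:
            # Occasional random 'jump' to an indirectly affected neuron.
            current = int(rng.integers(damaged.shape[0]))
            continue
        strengths = np.abs(damaged[current])
        if strengths.sum() == 0:  # everything here is already destroyed
            current = int(rng.integers(damaged.shape[0]))
            continue
        # Follow the strongest surviving connection, then destroy it.
        nxt = int(np.argmax(strengths))
        damaged[current, nxt] = 0.0
        current = nxt
    return damaged

rng = np.random.default_rng(42)
w = rng.normal(size=(200, 200))  # toy fully-connected 'brain region'
lesioned = simulate_stroke(w, start=0, n_steps=500, rng=rng)
print("connections destroyed:", np.count_nonzero(w) - np.count_nonzero(lesioned))
```

A real experiment on a transformer would need to decide what ‘nearby’ means across layers and attention heads, which is exactly the locality question raised above.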
What do you think would happen?
Might the results of this experiment yield clues to the nature of strokes in humans? The nature of how the human brain stores language?
Could similar experiments on more complex ANNs in the future reveal clues to the nature of other brain conditions – tumors, aneurysms, depression, OCD, etc.?
Midjourney Prompt: An explosion inside an android’s head. fractal brain. –v 4
This article was written by Level Ex CEO Sam Glassenberg and originally featured on LinkedIn
Follow Sam on LinkedIn to learn more about advancing medicine through videogame technology and design