
TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TAs: Arthur Guez, aguez@gatsby.ucl.ac.uk; Marius Pachitariu, marius@gatsby.ucl.ac.uk
Website: http://www.gatsby.ucl.ac.uk/~aguez/tn1/
Lectures: Tuesday/Friday, 11:00-1:00. Review: Tuesday, starting at 4:30.
Homework: assigned Friday, due Friday (1 week later). First homework: assigned Oct. 7, due Oct. 14.

What is computational neuroscience? Our goal: figure out how the brain works.

There are about 10 billion cubes of this size in your brain! (figure scale: 10 microns)

How do we go about making sense of this mess? David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)

Example #1: memory. the problem: recall events, typically based on partial information.

Example #1: memory. the problem: recall events, typically based on partial information; associative or content-addressable memory. an algorithm: dynamical systems with fixed points. (figure: fixed points in activity space, axes r1, r2, r3)

Example #1: memory. the problem: recall events, typically based on partial information; associative or content-addressable memory. an algorithm: dynamical systems with fixed points. neural implementation: Hopfield networks. x_i = sign(∑_j J_ij x_j)
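To make the Hopfield update concrete, here is a minimal sketch (mine, not from the slides): store a few random patterns with a Hebbian rule, then recall one from a corrupted cue by iterating x_i = sign(∑_j J_ij x_j). The network size, number of patterns, and corruption level are arbitrary choices.

```python
import numpy as np

# Minimal Hopfield-network sketch (illustration only): Hebbian storage of a few
# binary patterns, then recall from a corrupted cue by iterating the sign rule.
rng = np.random.default_rng(0)
N = 100                                      # number of neurons (arbitrary)
patterns = rng.choice([-1, 1], size=(3, N))  # three stored memories

# Hebbian weights J_ij = (1/N) * sum over patterns of xi_i * xi_j, no self-connections
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)

# Start from a corrupted version of pattern 0 (flip 20% of the bits)
x = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
x[flip] *= -1

# Synchronous updates until the state stops changing (a fixed point)
for _ in range(50):
    x_new = np.sign(J @ x)
    x_new[x_new == 0] = 1                    # break ties arbitrarily
    if np.array_equal(x_new, x):
        break
    x = x_new

print("overlap with stored pattern:", (x @ patterns[0]) / N)   # ~1.0 if recall worked
```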

Example #2: vision. the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.

Example #2: vision. the problem (modern version): 2-D image on retina → recover the latent variables. (figure: crude drawing of a house, sun, and tree; "bad artist")

Example #2: vision. the problem (modern version): 2-D image on retina → recover the latent variables. (figure: the same drawing, now with a cloud added)

Example #2: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. an algorithm: graphical models. (figure: latent variables x1, x2, x3 connected to a low-level representation r1, r2, r3, r4)

Example #2: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. an algorithm: graphical models. (figure: the same graphical model, with inference running from the low-level representation r1...r4 up to the latent variables x1, x2, x3)

Example #2: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. an algorithm: graphical models. implementation in networks of neurons: no clue.
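The slides don't specify a particular graphical model, so the sketch below uses a made-up toy: one binary latent variable ("tree present?") generating four binary low-level features, with inference done by Bayes' rule. It only illustrates what "recovering latent variables" means; it says nothing about how neurons do it.

```python
import numpy as np

# Toy graphical model (hypothetical example): a binary latent variable x
# ("tree present?") generates four binary low-level features r1..r4
# independently. Inference = computing the posterior P(x | r).
p_x = 0.3                                        # prior probability of a tree
p_r_given_x1 = np.array([0.9, 0.8, 0.7, 0.6])    # P(r_k = 1 | tree)
p_r_given_x0 = np.array([0.2, 0.3, 0.1, 0.4])    # P(r_k = 1 | no tree)

r = np.array([1, 1, 0, 1])                       # an observed low-level representation

# Likelihoods P(r | x), assuming the features are conditionally independent
lik_x1 = np.prod(np.where(r == 1, p_r_given_x1, 1 - p_r_given_x1))
lik_x0 = np.prod(np.where(r == 1, p_r_given_x0, 1 - p_r_given_x0))

# Bayes' rule: P(x = 1 | r) = P(r | x = 1) P(x = 1) / P(r)
posterior = lik_x1 * p_x / (lik_x1 * p_x + lik_x0 * (1 - p_x))
print("P(tree | r) =", round(posterior, 3))
```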

Comment #1: the problem; the algorithm; neural implementation.

Comment #1: the problem; the algorithm; neural implementation. easier → harder. often ignored!!!

Comment #1: the problem; the algorithm; neural implementation. easier → harder. A common approach: experimental observation → model. Usually very underconstrained!!!!

Comment #1: the problem; the algorithm; neural implementation. easier → harder. Example i: CPGs (central pattern generators). (figure: firing-rate trace) Too easy!!!

Comment #1: the problem; the algorithm; neural implementation. easier → harder. Example ii: single cell modeling. C dV/dt = -g_L(V - V_L) - g_K n^4(V - V_K) - …, dn/dt = … … lots and lots of parameters … which ones should you use?
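For concreteness, here is a standard Hodgkin-Huxley point-neuron simulation (the textbook squid-axon parameters), integrated with forward Euler; it makes the "lots and lots of parameters" complaint concrete. The injected current, duration, and step size are my own arbitrary choices, not values from the lecture.

```python
import numpy as np

# Standard Hodgkin-Huxley point neuron, forward-Euler integration.
C = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                       # time step and duration, ms (arbitrary)
steps = int(T / dt)
I_ext = 10.0                             # injected current, uA/cm^2 (arbitrary)

V, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial conditions near rest
Vs = np.empty(steps)

for i in range(steps):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    Vs[i] = V

# Count upward crossings of 0 mV as spikes
print("number of spikes:", int(np.sum((Vs[1:] > 0) & (Vs[:-1] <= 0))))
```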

Comment #1: the problem; the algorithm; neural implementation. easier → harder. Example iii: network modeling. lots and lots of parameters × thousands.

Comment #2: the problem; the algorithm; neural implementation. easier → harder. You need to know a lot of math!!!!! (figures: the graphical model and activity-space pictures from the earlier examples)

Comment #3: the problem; the algorithm; neural implementation. easier → harder. This is a good goal, but it's hard to do in practice. Our actual bread and butter: 1. Explaining observations (mathematically). 2. Using sophisticated analysis to design simple experiments that test hypotheses.

Comment #3: Two experiments:
- record, using loose patch, from a bunch of cells in culture
- block synaptic transmission
- record again
- found quantitative support for the balanced regime.
J. Neurophys., 83:808-827 and 828-835, 2000

Comment #3: Two experiments:
- perform whole cell recordings in vivo
- stimulate cells with a current pulse every couple hundred ms
- build a current-triggered PSTH
- showed that the brain is intrinsically very noisy, and is likely to be using a rate code.
Nature, 466:123-127 (2010)

Comment #4: the problem; the algorithm; neural implementation. easier → harder. these are linked!!! some algorithms are easy to implement on a computer but hard in a brain, and vice-versa.

Comment #4: hard for a brain, easy for a computer: A^-1, z = x + y, ∫dx ... easy for a brain, hard for a computer: associative memory.

Comment #4: the problem; the algorithm; neural implementation. easier → harder. these are linked!!! some algorithms are easy to implement on a computer but hard in a brain, and vice-versa. we should be looking for the vice-versa ones. it can be hard to tell which is which.

Basic facts about the brain

Your brain

Your cortex unfolded: neocortex (cognition), 6 layers, ~30 cm, ~0.5 cm; subcortical structures (emotions, reward, homeostasis, much more)

Your cortex unfolded: 1 cubic millimeter, ~3×10^-5 oz

1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons

1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons.
1 mm^2 of a CPU: 1 million transistors; 2 connections/transistor (=> 2 million connections); 0.002 km of wire.

1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons.
1 mm^2 of a CPU: 1 million transistors; 2 connections/transistor (=> 2 million connections); 0.002 km of wire.
whole brain (2 kg): 10^11 neurons; 10^15 connections; 8 million km of axons.
whole CPU: 10^9 transistors; 2×10^9 connections; 2 km of wire.
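A quick back-of-envelope check (my own arithmetic, not on the slide) that the per-mm^3 numbers and the whole-brain numbers are mutually consistent, assuming the cortical figures roughly hold throughout the brain:

```python
# Back-of-envelope consistency check of the slide's numbers.
neurons_per_mm3 = 50_000
connections_per_neuron = 10_000
print(neurons_per_mm3 * connections_per_neuron)      # 5e8 connections per mm^3

neurons_whole_brain = 1e11
print(neurons_whole_brain * connections_per_neuron)  # 1e15 connections in the whole brain

# 2 kg of tissue at ~1 mg per mm^3 is ~2e6 mm^3; at 4 km of axon per mm^3
# (assuming the cortical figure applies throughout) that gives the 8 million km.
print(2e6 * 4)                                       # 8,000,000 km of axons
```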


dendrites (input), soma (spike generation), axon (output). (figure: voltage trace; spike of ~+20 mV and ~1 ms width, resting near -50 mV, shown over ~100 ms)

(figure: a synapse, with current flow indicated)

(figure: voltage vs. time trace; +20 mV, -50 mV, 100 ms)

neuron i, neuron j: when neuron j emits a spike, V on neuron i shows an EPSP. (figure: EPSP time course, ~10 ms)

neuron i, neuron j: when neuron j emits a spike, V on neuron i shows an IPSP. (figure: IPSP time course, ~10 ms) amplitude = w_ij, which changes with learning.
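A tiny sketch of that picture (illustrative values, not from the lecture): the PSP on neuron i is a fixed kernel scaled by the weight w_ij, so EPSPs and IPSPs differ only in the sign of w_ij.

```python
import numpy as np

# Postsynaptic potential on neuron i following a spike in neuron j, modeled as
# a double-exponential kernel scaled by the synaptic weight w_ij.
dt = 0.1                                    # ms
t = np.arange(0.0, 30.0, dt)                # time after the presynaptic spike
tau_rise, tau_decay = 1.0, 5.0              # ms (made-up time constants)

# Unit-amplitude PSP kernel (normalized so its peak is 1)
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()

w_ij = 0.5                                  # +0.5 mV EPSP; a negative w_ij would give an IPSP
V_rest = -65.0                              # mV
V = V_rest + w_ij * kernel                  # membrane potential of neuron i after the spike

print("peak deflection (mV):", round(V.max() - V_rest, 2))   # equals w_ij
```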

(figure: the synapse again, with current flow, labeled with its weight w_ij)

A bigger picture view of the brain

(diagram) x: latent variables → r: peripheral spikes → [sensory processing] → r̂: "direct" code for latent variables → [cognition, memory, action selection] → r̂': "direct" code for motor actions → [motor processing] → r': peripheral spikes → x': motor actions. (the box labeled "brain" spans sensory processing through motor processing)

Who is walking behind the picket fence?

(animation: successive glimpses of the figure through the picket fence, each frame labeled with the response r)

you are the cutest stick figure ever! r

(diagram) x: latent variables → r: peripheral spikes → [sensory processing] → r̂: "direct" code for latent variables → [cognition, memory, action selection] → r̂': "direct" code for motor actions → [motor processing] → r': peripheral spikes → x': motor actions. (the box labeled "brain" spans sensory processing through motor processing)

In some sense, action selection is the most important problem: if we don't choose the right actions, we don't reproduce, and all the neural coding and computation in the world isn't going to help us.

Do I call him and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat oreos?

Do I call her and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat oreos?

(diagram) x: latent variables → r: peripheral spikes → [sensory processing] → r̂: "direct" code for latent variables → [cognition, memory, action selection] → r̂': "direct" code for motor actions → [motor processing] → r': peripheral spikes → x': motor actions. (the box labeled "brain" spans sensory processing through motor processing)

Problems:
1. How does the brain extract latent variables?
2. How does it manipulate latent variables?
3. How does it learn to do both?
Ask at two levels:
1. What are the algorithms?
2. How are they implemented in neural hardware?

What do we know about the brain? (Highly biased)

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color. Connectivity. We know (more or less) which area is connected to which.

The van Essen diagram

a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color. Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level (w_ij). But we might in a few decades!

b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley). Dendrites. Lots of potential for incredibly complex processing. My guess: all they do is make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii).

(figure: m neurons and n neurons separated by a distance L) total wire length without dendrites: ~nmL

(figure: with dendrites and axons meeting in the middle, one group contributes cables of total length ~mL, the other ~nL) total wire length without dendrites: ~nmL; total wire length with dendrites: ~(n+m)L
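Plugging made-up numbers into the two scalings makes the point of the figure obvious (the counts and distance below are purely illustrative):

```python
# The slide's wiring argument with illustrative numbers: m source neurons each
# contacting n target neurons a distance L away.
m, n, L = 100, 100, 1.0            # neuron counts and separation in mm (made up)

# Point neurons: every one of the m*n pairs needs its own wire of length ~L.
without_dendrites = n * m * L      # ~ nmL

# With dendrites/axons: each neuron contributes one cable of length ~L,
# and connections are made where the cables overlap.
with_dendrites = (n + m) * L       # ~ (n+m)L

print(without_dendrites, "mm vs", with_dendrites, "mm")   # 10000.0 mm vs 200.0 mm
```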

b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley). Dendrites. Lots of potential for incredibly complex processing. My guess: all they do is make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii). How much I would bet that's true: 20p.

c. The neural code. My guess: once you get away from the periphery, it's mainly firing rate: an inhomogeneous Poisson process with a refractory period is a good model of spike trains. How much I would bet: £100. The role of correlations: still unknown. My guess: don't have one.
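Here is a minimal sketch of that spike-train model (my own illustrative rate profile and refractory period): an inhomogeneous Poisson process simulated in small time bins, with an absolute refractory period.

```python
import numpy as np

# Spike train from an inhomogeneous Poisson process with an absolute refractory
# period. Rate profile, refractory period, and duration are illustrative choices.
rng = np.random.default_rng(1)
dt = 0.001                                   # 1 ms bins
T = 2.0                                      # seconds
t = np.arange(0.0, T, dt)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 2.0 * t)   # time-varying rate, Hz
t_refract = 0.002                            # 2 ms absolute refractory period

spikes = []
last_spike = -np.inf
for ti, r in zip(t, rate):
    if ti - last_spike < t_refract:
        continue                             # still refractory: cannot spike
    if rng.random() < r * dt:                # Bernoulli approximation to Poisson in a small bin
        spikes.append(ti)
        last_spike = ti

print("spike count:", len(spikes), " mean rate (Hz):", len(spikes) / T)
```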

d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago: van Vreeswijk and Sompolinsky (Science, 1996); van Vreeswijk and Sompolinsky (Neural Comp., 1998). We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of: i) understanding networks that have interesting computational properties, ii) computing the correlational structure in those networks.

e. Learning. We know a lot of facts (LTP, LTD, STDP).
• it's not clear which, if any, are relevant.
• the relationship between learning rules and computation is essentially unknown.
Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information. These are promising, but the link to the brain has not been fully established.


What is unsupervised learning? Learning structure from data without any help from anybody. Example: most visual scenes are very unlikely to occur. 1000 × 1000 pixels => a million-dimensional space of possible pictures. The set of actual visual scenes is much smaller, and forms a very complicated manifold. (figure: the manifold of visual scenes inside pixel space)

What is unsupervised learning? Learning from spikes: (figure: responses plotted in the plane of neuron 1 vs neuron 2 firing rates, forming two clusters, "dog" and "cat")
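As a cartoon of "learning from spikes", the sketch below clusters made-up two-neuron firing rates with a bare-bones k-means; the "dog" and "cat" labels are used only to generate the data and are never shown to the algorithm.

```python
import numpy as np

# Unsupervised clustering of firing rates: responses of two neurons to many
# stimuli form two clusters, found without labels by a tiny k-means.
rng = np.random.default_rng(2)
dog = rng.normal(loc=[20.0, 5.0], scale=2.0, size=(50, 2))   # Hz, invented numbers
cat = rng.normal(loc=[5.0, 20.0], scale=2.0, size=(50, 2))
X = np.vstack([dog, cat])                  # the algorithm sees only these points

# k-means with k = 2, initialized from two random data points
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])

print("cluster centers (Hz):\n", centers.round(1))   # ~[20, 5] and [5, 20]
```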

What is unsupervised learning? Learning structure from data without any help from anybody. Which is real and which is a painting?

A word about learning (remember these numbers!!!): You have about 10^15 synapses. If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them. 30 years ≈ 10^9 seconds. To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second. Learning in the brain is almost completely unsupervised!!! (stolen from Geoff Hinton)
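The arithmetic behind those numbers, written out:

```python
# The slide's (Hinton's) synapse-setting arithmetic.
synapses = 1e15
bits_per_synapse = 1.0
seconds_in_30_years = 30 * 365 * 24 * 3600       # ~1e9 seconds

bits_needed = 0.1 * synapses * bits_per_synapse  # set 1/10 of the synapses
print(bits_needed / seconds_in_30_years)         # ~1e5 bits/second
```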

f. Where we know the algorithms, we know the neural implementation (sort of): vestibular system, sound localization, echolocation, addition. This is not a coincidence!!!! Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)

What we know: my score (1-10).
a. Anatomy: 5
b. Single neurons: 6
c. The neural code: 6
d. Recurrent networks of spiking neurons: 3
e. Learning: 2
The hard problems:
1. How does the brain extract latent variables? 1.001
2. How does it manipulate latent variables? 1.002
3. How does it learn to do both? 1.001

Outline:
1. Basics: single neurons/axons/dendrites/synapses. (Latham)
2. Language of neurons: neural coding. (Sahani)
3. Learning at the network and behavioral level. (Dayan)
4. What we know about networks (very little). (Latham)

Outline for this part of the course (biophysics):
1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!