Feedforward RMLP equations:

$$y_1(t) = f\big(W_1 x(t) + W_f y_1(t-1) + b_1\big)$$
$$y_2(t) = W_2 y_1(t) + b_2$$

Sensitivity (general form for the RMLP):

$$\frac{\partial y_2(t)}{\partial x(t)} = W_2\, D(t)\, W_1, \qquad D(t) = \operatorname{diag}\big(f'(\cdot)\big)$$

Feedforward linear equation:

$$y(t) = W x(t)$$

Sensitivity (general form for the linear model):

$$\frac{\partial y(t)}{\partial x(t)} = W$$
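As a sketch of the two sensitivities above, assuming a tanh hidden layer and hypothetical random weights W1 (input to hidden), Wf (feedback), and W2 (hidden to output): for the linear model the sensitivity is simply W, while for the RMLP the instantaneous Jacobian is W2·diag(f'(net))·W1.

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.standard_normal((3, 4))     # 4 inputs -> 3 hidden PEs
Wf = rng.standard_normal((3, 3))     # hidden-layer feedback
W2 = rng.standard_normal((2, 3))     # 3 hidden PEs -> 2 outputs

def rmlp_sensitivity(x, y1_prev):
    """Jacobian dy2/dx at one time step (biases omitted for brevity)."""
    y1 = np.tanh(W1 @ x + Wf @ y1_prev)
    D = np.diag(1.0 - y1 ** 2)       # f'(net) for tanh
    return W2 @ D @ W1               # shape (2, 4)

x = rng.standard_normal(4)
S = rmlp_sensitivity(x, np.zeros(3))

# Check against central finite differences (feedback state held fixed).
f = lambda xx: W2 @ np.tanh(W1 @ xx + Wf @ np.zeros(3))
eps = 1e-6
S_fd = np.zeros_like(S)
for j in range(4):
    dxv = np.zeros(4); dxv[j] = eps
    S_fd[:, j] = (f(x + dxv) - f(x - dxv)) / (2 * eps)

print(np.allclose(S, S_fd, atol=1e-6))   # True
```

The finite-difference check confirms the analytic Jacobian term by term.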
Data Analysis: The Effect of Sensitive Neurons on Performance
Identify the neurons that affect the output the most.
[Figure: test-trajectory reconstructions using the highest-, middle-, and lowest-sensitivity neurons, together with the cumulative probability of the 3D error radius (mm) over the movements (hits) of the test trajectory for four neuron subsets: the 10 highest-sensitivity, 84 intermediate-sensitivity, and 10 lowest-sensitivity neurons, and all neurons.]
[Figure: sensitivity per neuron for Primate 1, Session 1; the most sensitive neurons are indices 93, 19, 29, 5, 4, 84, 7, 26, 45, and 104.]
The decay trend appears in all animals and behavioral paradigms.
Cortical Contributions (Belle, Day 2)
[Figure: reconstructed trajectories for RMLPs trained on each combination of cortical inputs: Area 1, Area 2, Area 3, Area 4, Areas 12, 13, 14, 23, 24, 34, 123, 124, 134, 234, and 1234.]
Area 1 = PP, Area 2 = M1, Area 3 = PMd, Area 4 = M1 (right).
Train 15 separate RMLPs with every combination of cortical input.
Is there enough information in spike trains for modeling movement?
Analysis is based on the time-embedded model.
Correlation with the desired signal is based on a linear filter output for each neuron.
Utilize a non-stationary tracking algorithm: the parameters are updated by LMS.
Build a spatial filter that is adaptive in real time; its sparse structure, based on regularization, enables neuron selection.
The tap weights are adapted by LMS; the spatial coefficients are adapted by on-line LAR.
(Kim et al., MLSP, 2004)
Architecture
[Diagram: each neuronal input x_1(n) … x_M(n) feeds an L-tap delay line (z^-1 stages) with weights w_11 … w_1L, …, w_M1 … w_ML; the summed filter outputs y_1(n) … y_M(n) are combined with spatial coefficients c_1 … c_M and compared to the desired signal d(n).]
Training Algorithms
The tap weights for every time lag are updated by LMS. Then the spatial filter coefficients are obtained by an on-line version of least angle regression (LAR) (Efron et al., 2004).
Least angle regression proceeds as follows:
1. Start with all $\beta_i = 0$, so the residual is $r = y - X\beta = y$.
2. Find $j = \arg\max_i |x_i^T r|$.
3. Grow $\beta_j$ (so $r = y - x_j \beta_j$) until some other predictor $x_k$ reaches the same correlation, $|x_k^T r| = |x_j^T r|$.
4. Grow $\beta_j$ and $\beta_k$ together (so $r = y - (x_j \beta_j + x_k \beta_k)$) until a third predictor $x_q$ satisfies $|x_q^T r| = |x_k^T r| = |x_j^T r|$, and so on.

The LMS update for the tap weights is

$$w_{ij}(n+1) = w_{ij}(n) + 2\eta\, e(n)\, x_{ij}(n)$$
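The LMS tap-weight update can be sketched by identifying a hypothetical 3-tap FIR filter from its input and output (the filter values and step size are illustrative):

```python
import numpy as np

# Sketch of the LMS update w(n+1) = w(n) + 2*eta*e(n)*x(n), used here to
# identify a hypothetical 3-tap FIR filter from noiseless input/output.
rng = np.random.default_rng(1)

w_true = np.array([0.5, -0.3, 0.2])          # unknown filter to identify
L = len(w_true)
w = np.zeros(L)
eta = 0.01                                   # step size

x = rng.standard_normal(5000)
for n in range(L - 1, len(x)):
    x_tap = x[n - L + 1:n + 1][::-1]         # tap-delay input vector
    d = w_true @ x_tap                       # desired response
    e = d - w @ x_tap                        # instantaneous error
    w = w + 2 * eta * e * x_tap              # LMS update

print(np.round(w, 3))                        # approaches w_true
```

With a noiseless desired signal the weight error contracts toward zero, so the estimate converges to the true taps.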
Application to BMI Data: Tracking Performance
Application to BMI Data: Neuronal Subset Selection
[Figure: hand trajectory (z) tracking, and the selected neuronal channel indices during the early and late parts of the record.]
Generative Models for BMIs
Use partial information about the physiological system, normally in the form of states.
They can be applied either to binned data or to spike trains directly. Here we will only cover the spike train implementations.
Difficulty of spike train analysis: spike trains are point processes, i.e., all the information is contained in the timing of events, not in the amplitude of the signals!
Goal
Build an adaptive signal processing framework for BMI decoding in the spike domain.
Features of spike domain analysis:
Binning window size is not a concern.
Preserves the randomness of the neuron behavior.
Provides more understanding of neuron physiology (tuning) and interactions at the cell assembly level.
Infers kinematics online.
Deals with nonstationarity.
Requires more computation at millisecond time resolution.
Recursive Bayesian Approach

State (time-series model), used in the prediction step:
$$x_t = F_t(x_{t-1}, v_t)$$

Continuous observation, used in the updating step:
$$z_t = H_t(x_t, n_t)$$

The goal is P(state | observation).
Recursive Bayesian approach
State space representation
$$x_{t+1} = f(x_t) + v_t$$
$$z_t = h(u_t, x_t) + n_t$$

The first equation (system model) defines a first-order Markov process. The second equation (observation model) defines the likelihood of the observations $p(z_t|x_t)$. The problem is completely defined by the prior distribution $p(x_0)$.
Although the posterior distribution $p(x_{0:t}|u_{1:t}, z_{1:t})$ constitutes the complete solution, the filtering density $p(x_t|u_{1:t}, z_{1:t})$ is normally used for on-line problems.
The general solution methodology is to integrate over the unknown variables (marginalization).
Recursive Bayesian approach
There are two stages to update the filtering density:
Prediction (Chapman-Kolmogorov): the system model $p(x_t|x_{t-1})$ propagates the posterior density into the future.
Update: Bayes' rule updates the filtering density.
The following equations are needed in the solution.
$$p(x_t \mid u_{1:t-1}, z_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid u_{1:t-1}, z_{1:t-1})\, dx_{t-1}$$

$$p(x_t \mid u_{1:t}, z_{1:t}) = \frac{p(z_t \mid x_t, u_t)\, p(x_t \mid u_{1:t-1}, z_{1:t-1})}{p(z_t \mid u_{1:t}, z_{1:t-1})}$$

$$p(x_t \mid x_{t-1}) = \int \delta\big(x_t - f(x_{t-1}) - v_t\big)\, p(v_t)\, dv_t$$

$$p(z_t \mid x_t, u_t) = \int \delta\big(z_t - h(u_t, x_t) - n_t\big)\, p(n_t)\, dn_t$$

$$p(z_t \mid u_{1:t}, z_{1:t-1}) = \int p(z_t \mid x_t, u_t)\, p(x_t \mid u_{1:t-1}, z_{1:t-1})\, dx_t$$
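The prediction/update recursion can be sketched numerically for a scalar state by discretizing the densities on a grid (a hypothetical Gaussian random-walk system and Gaussian observation noise, with no control input $u_t$; all values illustrative):

```python
import numpy as np

# Grid-based sketch of the recursive Bayesian filter for a scalar
# random-walk state x_t = x_{t-1} + v_t and observation z_t = x_t + n_t.
grid = np.linspace(-8, 8, 321)
dx = grid[1] - grid[0]

def gauss(u, s):
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

K = gauss(grid[:, None] - grid[None, :], 0.5)   # p(x_t | x_{t-1}) kernel

posterior = gauss(grid, 1.0)                    # prior p(x_0)
rng = np.random.default_rng(2)
x_true = 0.0
for _ in range(30):
    x_true += rng.normal(0, 0.5)                # state transition
    z = x_true + rng.normal(0, 0.3)             # noisy observation
    prior = K @ posterior * dx                  # prediction (Chapman-Kolmogorov)
    posterior = gauss(z - grid, 0.3) * prior    # Bayes update with p(z|x)
    posterior /= posterior.sum() * dx           # normalize
x_hat = grid @ posterior * dx                   # posterior mean

print(abs(x_hat - x_true))                      # small tracking error
```

The matrix-vector product implements the Chapman-Kolmogorov integral, and the pointwise multiplication by the likelihood implements the Bayes update.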
State estimation framework for BMI decoding

Kinematics state (system model):
$$x_k = F_{k-1}(x_{k-1}, v_k)$$

Neural tuning function (multi-spike-train observation):
$$z_k = H_k(x_k, n_k)$$
[Figure: binned spike train (top) and hand velocity (bottom) as functions of time (ms).]
Decoding
Kinematic dynamic model.
Key idea: work with the probability of spike firing, which is a continuous random variable.
Kalman filter for BMI decoding
Kinematic state → (linear) neuron tuning function → firing rate (continuous observation); the prediction and updating of P(state | observation) are Gaussian and linear. [Wu et al. 2006]
For Gaussian noises and linear prediction and observation models, there is an analytic solution called the Kalman filter.
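A minimal sketch of the Kalman recursion (a generic position-velocity state and a scalar observation; the matrices are illustrative stand-ins, not the tuning model of [Wu et al. 2006]):

```python
import numpy as np

# Minimal Kalman filter: linear dynamics x_k = F x_{k-1} + v_k and
# linear observation z_k = H x_k + n_k with Gaussian noises.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # observation noise covariance

rng = np.random.default_rng(3)
x_true = np.array([0.0, 0.1])
x, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0, 0.5, 1)
    # Prediction
    x, P = F @ x, F @ P @ F.T + Q
    # Update (Kalman gain, innovation correction)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(abs(x[0] - x_true[0]))             # filtered position error
```

The two analytic steps mirror the prediction and update densities of the recursive Bayesian solution, which stay Gaussian under these assumptions.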
Particle filter for BMI decoding
Kinematic state → (linear) neuron tuning function with an exponential nonlinearity → firing rate (continuous observation); the prediction and updating of P(state | observation) are non-Gaussian. [Brockwell et al. 2004]
In general the integrals need to be approximated by sums using Monte Carlo integration, with a set of samples drawn from the posterior distribution of the model parameters.
Step 2 - Tuning Function Estimation
Neural firing model. Assumption: generation of the spikes depends only on the kinematic vector we choose.
The velocity passes through a linear filter, a nonlinearity f, and a Poisson spike generator:

$$\lambda_t = f(k \cdot v_t)$$
$$\text{spike}_t \sim \text{Poisson}(\lambda_t)$$
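A sketch of this linear-nonlinear-Poisson generator (an exponential nonlinearity with an illustrative gain and baseline rate):

```python
import numpy as np

# Sketch of the tuning model: velocity -> linear filter -> nonlinearity f
# -> Poisson spike generator. The exponential f and parameters are
# illustrative choices.
rng = np.random.default_rng(4)

dt = 0.001                                   # 1 ms bins
T = 20000
v = np.sin(2 * np.pi * np.arange(T) * dt)    # velocity signal
k, mu = 1.5, np.log(20.0)                    # gain and 20 sp/s baseline

lam = np.exp(mu + k * v)                     # lambda_t = f(k . v_t), sp/s
spikes = rng.poisson(lam * dt)               # spike_t ~ Poisson(lambda_t dt)

print(spikes.sum())                          # total spike count
```

The spike counts inherit the Poisson randomness while the underlying intensity follows the kinematics, which is the structure the later decoding steps invert.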
Step 2 - Linear Filter Estimation
Spike-Triggered Average (STA), with a geometric interpretation:

$$k = \alpha\, \big(E[v v^T] + \varepsilon I\big)^{-1} E[v \mid \text{spike}]$$
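A sketch of the whitened spike-triggered average (a hypothetical 5-dimensional Gaussian stimulus and exponential nonlinearity; for Gaussian stimuli the whitened STA recovers the filter direction up to a scale factor):

```python
import numpy as np

# Whitened STA: k is proportional to E[v v^T]^{-1} E[v | spike] for a
# Gaussian stimulus. Filter, nonlinearity, and dimensions illustrative.
rng = np.random.default_rng(5)

D, T = 5, 50000
k_true = np.array([1.0, -0.5, 0.0, 0.3, 0.8])
V = rng.standard_normal((T, D))                         # stimulus vectors
p = np.clip(np.exp(-1.5 + 0.3 * (V @ k_true)), 0, 1)    # spike probability
spikes = rng.random(T) < p

sta = V[spikes].mean(axis=0)                 # E[v | spike]
C = V.T @ V / T                              # E[v v^T]
k_hat = np.linalg.solve(C, sta)              # whitened STA, up to scale

cos = k_hat @ k_true / (np.linalg.norm(k_hat) * np.linalg.norm(k_true))
print(round(cos, 2))                         # direction match with k_true
```

Solving against the stimulus covariance rather than inverting it explicitly keeps the estimate numerically stable.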
[Figure: PCA projection (1st vs. 2nd principal component) of the velocity vectors Vp and the spike-triggered velocity vectors VpS for neuron 72.]
Step 2- Nonlinear f estimation
Step 2- Diversity of neural nonlinear properties
Ref: Paradoxical cold
[Hensel et al. 1959]
Step 2- Estimated firing probability and
generated spikes
Step 3: Sequential Estimation Algorithm for Point Process Filtering
Consider the neuron as an inhomogeneous Poisson point process. The conditional intensity, given the kinematic state x(t), the parameters θ(t), and the spiking history H(t), is

$$\lambda(t \mid x(t), \theta(t), H(t)) = \lim_{\Delta t \to 0} \frac{\Pr\big(N(t+\Delta t) - N(t) = 1 \mid x(t), \theta(t), H(t)\big)}{\Delta t}$$

and is modeled with an exponential tuning function:

$$\lambda_k = \exp\{\mu + k \cdot v_k\}$$

Observing $\Delta N_k$ spikes in an interval $\Delta t$, the likelihood of the spike model is

$$P(\Delta N_k \mid x_k, \theta_k, H_k) = (\lambda_k \Delta t)^{\Delta N_k} \exp(-\lambda_k \Delta t)$$

The posterior of the state vector, given an observation $\Delta N_k$, is

$$p(x_k \mid \Delta N_k, H_k) = \frac{P(\Delta N_k \mid x_k, H_k)\, p(x_k \mid H_k)}{p(\Delta N_k \mid H_k)}$$

and the one-step prediction density (Chapman-Kolmogorov) is

$$p(x_k \mid H_k) = \int p(x_k \mid x_{k-1}, H_{k-1})\, p(x_{k-1} \mid \Delta N_{k-1}, H_{k-1})\, dx_{k-1}$$
Step 3: Sequential Estimation Algorithm for
Point Process Filtering
Monte Carlo methods are used to estimate the integral. Let $\{x^i_{0:k}, w^i_k\}_{i=1}^{N_S}$ represent a random measure on the posterior density, and let $q(x_{0:k} \mid N_{1:k})$ be the proposal density. The posterior density can then be approximated by

$$p(x_{0:k} \mid N_{1:k}) \approx \sum_{i=1}^{N_S} w^i_k\, \delta(x_{0:k} - x^i_{0:k})$$

Generating samples from $q(x_{0:k} \mid N_{1:k})$ by the principle of importance sampling,

$$w^i_k \propto \frac{p(x^i_{0:k} \mid N_{1:k})}{q(x^i_{0:k} \mid N_{1:k})}, \qquad w^i_k = w^i_{k-1}\, \frac{p(\Delta N_k \mid x^i_k)\, p(x^i_k \mid x^i_{k-1})}{q(x^i_k \mid x^i_{k-1}, \Delta N_k)}$$

By MLE we can find the maximum, or use direct estimation with kernels of the mean and variance:

$$\tilde{x}_k = E[x_k \mid \Delta N_{1:k}] \approx \sum_{i=1}^{N_S} w^i_k\, x^i_k$$

$$\tilde{V}_k \approx \sum_{i=1}^{N_S} w^i_k\, (x^i_k - \tilde{x}_k)(x^i_k - \tilde{x}_k)^T$$
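Sequential importance sampling with a point-process observation can be sketched with a scalar AR(1) velocity state and a single exponentially tuned neuron (all parameters, including rate, gain, and particle count, are illustrative):

```python
import numpy as np

# Particle filter sketch for point-process filtering: propagate with the
# system model, weight with P(dN|x) = (lam dt)^dN exp(-lam dt).
rng = np.random.default_rng(6)

dt, T, Np = 0.01, 400, 500
mu, beta = np.log(50.0), 2.0                  # baseline log-rate, gain

# Simulate the true state and the observed spike counts.
x_true = np.zeros(T)
spikes = np.zeros(T, dtype=int)
for t in range(1, T):
    x_true[t] = 0.99 * x_true[t - 1] + rng.normal(0, 0.05)
    spikes[t] = rng.poisson(np.exp(mu + beta * x_true[t]) * dt)

particles = rng.normal(0, 0.1, Np)
weights = np.full(Np, 1.0 / Np)
x_hat = np.zeros(T)
for t in range(1, T):
    particles = 0.99 * particles + rng.normal(0, 0.05, Np)  # prediction
    lam_dt = np.exp(mu + beta * particles) * dt
    weights *= lam_dt ** spikes[t] * np.exp(-lam_dt)        # update
    weights /= weights.sum()
    x_hat[t] = weights @ particles            # posterior-mean estimate
    if 1.0 / (weights ** 2).sum() < Np / 2:   # resample if ESS is low
        idx = rng.choice(Np, Np, p=weights)
        particles = particles[idx]
        weights = np.full(Np, 1.0 / Np)

mse = np.mean((x_hat - x_true) ** 2)
print(mse)
```

Resampling on low effective sample size is one common remedy for weight degeneracy; the original proposal/weight recursion admits other choices.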
Posterior density at a time index
[Figure: estimated pdf of velocity at time index 45.092 s, showing the posterior density, the desired velocity, the velocities from sequential estimation (collapse and MLE), and the velocity from adaptive filtering.]
Step 3: Causality concerns
[Figure: binned spike train (top) and velocity (bottom) as functions of time (ms).]
The mutual information between the spike train and the lagged linear filter output KX is

$$I(lag) = \sum_{KX} p\big(KX(lag)\big) \sum_{spike = 0,1} p\big(spike \mid KX(lag)\big)\, \log_2 \frac{p\big(spike \mid KX(lag)\big)}{p(spike)}$$
For 185 neurons, average delay is 220.108 ms
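The lag that maximizes this mutual information can be found with a histogram estimator; the sketch below uses a synthetic spike train with an illustrative embedded 100-ms delay and assumes 1 ms bins:

```python
import numpy as np

# Histogram estimate of I(spike; KX(lag)) for a binary spike train and
# the lagged filtered kinematics KX. The 100-ms delay is synthetic.
rng = np.random.default_rng(7)

T, true_lag = 50000, 100
kx = rng.standard_normal(T)                  # filtered kinematics KX
p = 0.1 / (1 + np.exp(-2 * kx))              # spike prob. from delayed KX
spikes = np.zeros(T)
spikes[true_lag:] = rng.random(T - true_lag) < p[:-true_lag]

def mi(spk, x, nbins=8):
    """I(spike; x) in bits from joint histograms."""
    edges = np.quantile(x, np.linspace(0, 1, nbins + 1)[1:-1])
    xb = np.digitize(x, edges)
    info = 0.0
    for s in (0, 1):
        for b in range(nbins):
            pj = np.mean((spk == s) & (xb == b))
            if pj > 0:
                info += pj * np.log2(pj / (np.mean(spk == s) * np.mean(xb == b)))
    return info

lags = np.arange(0, 201, 20)
curve = [mi(spikes[lag:], kx[:T - lag]) for lag in lags]
best = lags[int(np.argmax(curve))]
print(best)                                  # recovers the embedded delay
```

Because the kinematic signal here is white, the information curve peaks sharply at the true delay; real neural data would produce a broader peak.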
[Figure 3-14: Mutual information I(spk, KX) as a function of time delay (ms) for 5 neurons (neurons 80, 72, 99, 108, and 77).]
Step 3: Information Estimated Delays
Step 4: Monte Carlo sequential kinematics estimation

Kinematic state propagated by the system model (prediction):
$$x^i_t = F_t\, x^i_{t-1} + v^i_t$$

Neural tuning function for the spike trains:
$$\lambda^i_t = f(k \cdot x^i_t)$$

Updating of the (non-Gaussian) weights with the point-process observation $\Delta N^{(j)}_t$:
$$w^i_t \propto w^i_{t-1}\, p\big(\Delta N^{(j)}_t \mid \lambda^i_t\big)$$

P(state | observation):
$$p\big(x_t \mid \Delta N^{(j)}_{1:t}\big) \approx \sum_{i=1}^{N} w^i_t\, \delta(x_{0:t} - x^i_{0:t}), \qquad \tilde{x}_k \approx \sum_{i=1}^{N} w^i_k\, x^i_k$$
Reconstruct the kinematics from neuron spike
trains
[Figure: reconstructed kinematics (Px, Py, Vx, Vy, Ax, Ay) versus the desired trajectories for t = 650-800, with per-panel correlation coefficients (cc_exp / cc_MLE): 0.7002/0.69188, 0.015071/0.040027, 0.91319/0.91162, 0.81539/0.8151, 0.97445/0.95376, 0.80243/0.67264.]
Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions

            | Position        | Velocity        | Acceleration
CC          | x       y       | x       y       | x       y
Expectation | 0.8161  0.8730  | 0.7856  0.8133  | 0.5066  0.4851
MLE         | 0.7750  0.8512  | 0.7707  0.7901  | 0.4795  0.4775
Table 3-3 Correlation Coefficient Evaluated by the Sliding Window (mean ± standard deviation)

            | Position                          | Velocity                          | Acceleration
CC          | x               y                 | x               y                 | x               y
Expectation | 0.8401 ± 0.0738 0.8945 ± 0.0477   | 0.7944 ± 0.0578 0.8142 ± 0.0658   | 0.5256 ± 0.0658 0.4460 ± 0.1495
MLE         | 0.7984 ± 0.0963 0.8721 ± 0.0675   | 0.7805 ± 0.0491 0.7918 ± 0.0710   | 0.4950 ± 0.0430 0.4471 ± 0.1399
Results comparison
[Sanchez, 2004]
Conclusion
Our results, and those from other laboratories, show it is possible to extract intent of movement for trajectories from multielectrode array data.
The current results are very promising, but the setups have limited difficulty, and performance seems to have reached a ceiling at an uncomfortable CC < 0.9.
Recently, spike-based methods have been developed in the hope of improving performance, but these models face many difficulties.
Experimental paradigms to move the field beyond the present level need to address:
Training (no desired response in paraplegics)
How to cope with coarse sampling of the neural population
How to include more neurophysiology knowledge in the design