Search alternatives:
largest decrease » largest decreases, larger decrease, marked decrease
rate increased » greater increase
c largest » _ largest
c larger » _ larger, _ large, a large
c large » _ large, a large, i large
c rate » _ rate
Biases in larger populations.
Published 2025: "…(A) Maximum absolute bias vs. the number of neurons in the population for the Bayesian decoder. Bias decreases with increasing neurons in the population. …"
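The excerpt above describes decoder bias shrinking as the population grows. As a rough illustration only (not the paper's model or parameters), the sketch below simulates Gaussian-tuned Poisson neurons, decodes each trial with a flat-prior Bayesian (MAP) decoder on a stimulus grid, and reports the maximum absolute bias for several population sizes; the tuning width, gain, stimulus range, and grid are all assumed values.

```python
# Hypothetical sketch, not the published analysis: Gaussian-tuned Poisson
# neurons decoded with a flat-prior Bayesian (MAP) decoder on a stimulus grid.
# All parameters (tuning width, gain, grid ranges) are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

def max_abs_bias(n_neurons, n_trials=400):
    stim_grid = np.linspace(-1.0, 1.0, 201)       # candidate stimulus values
    centers = np.linspace(-1.0, 1.0, n_neurons)   # preferred stimuli
    width, gain = 0.3, 20.0                       # assumed tuning width / peak rate

    def rates(s):
        s = np.atleast_1d(s)[:, None]             # (n_stimuli, 1)
        return gain * np.exp(-0.5 * ((s - centers) / width) ** 2) + 1e-3

    lam_grid = rates(stim_grid)                   # (n_grid, n_neurons)
    log_lam = np.log(lam_grid)
    biases = []
    for s_true in np.linspace(-0.8, 0.8, 17):
        counts = rng.poisson(rates(s_true), size=(n_trials, n_neurons))
        # Poisson log-likelihood per trial and grid point (flat prior, so MAP == ML)
        log_like = counts @ log_lam.T - lam_grid.sum(axis=1)
        estimates = stim_grid[log_like.argmax(axis=1)]
        biases.append(estimates.mean() - s_true)
    return np.max(np.abs(biases))

for n in (4, 8, 16, 32, 64):
    print(f"{n:3d} neurons: max |bias| = {max_abs_bias(n):.4f}")
```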
Slow rate adaptation increases population efficiency.
Published 2019: "…(B) Average fast and slow time constants of DRA in the LSO from double-exponential fitting: median tau(first) = 0.22 s, IQR: 1.465 s; median tau(second) = 2.2 s, IQR: 7.7 s. (C and D) Inclusion of a second time constant was superior to single-time-constant fitting of rate adaptation: the RMSEs of fits decreased (C; P = 0.001, paired Student t test, n = 13 neurons) and adjusted R²-values increased (D; P = 0.001, paired Student t test, n = 13 neurons). …"
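As a hedged illustration of the kind of comparison that excerpt describes (single- vs. double-exponential fits judged by RMSE and adjusted R²), the following sketch fits both models to a synthetic adaptation trace with SciPy. The time constants, amplitudes, and noise level are invented for the example, not the LSO data.

```python
# Hedged sketch of single- vs. double-exponential fitting of a rate-adaptation
# trace, compared by RMSE and adjusted R^2. All numbers here are made up.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau, c):
    return a * np.exp(-t / tau) + c

def double_exp(t, a1, tau1, a2, tau2, c):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

def adjusted_r2(y, y_hat, n_params):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n = len(y)
    return 1 - (ss_res / (n - n_params - 1)) / (ss_tot / (n - 1))

# Synthetic adaptation trace: fast (~0.2 s) and slow (~2 s) components plus noise.
t = np.linspace(0, 10, 400)
rng = np.random.default_rng(1)
rate = 60 * np.exp(-t / 0.22) + 30 * np.exp(-t / 2.2) + 20 + rng.normal(0, 2, t.size)

p1, _ = curve_fit(single_exp, t, rate, p0=(80, 1.0, 20))
p2, _ = curve_fit(double_exp, t, rate, p0=(60, 0.2, 30, 2.0, 20))

for name, model, params in [("single", single_exp, p1), ("double", double_exp, p2)]:
    fit = model(t, *params)
    rmse = np.sqrt(np.mean((rate - fit) ** 2))
    print(f"{name}-exponential: RMSE={rmse:.2f}, adj. R^2={adjusted_r2(rate, fit, len(params)):.3f}")
```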
In superadditive networks, more enhancers decrease noise and fidelity.
Published 2023: "…(A) Superadditivity is implemented in our model by linearly increasing k_on rates and linearly decreasing k_off rates. …"
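To make the quoted modelling choice concrete, here is a rough, assumption-laden sketch (not the authors' code or parameters): a two-state telegraph promoter whose k_on grows and k_off shrinks linearly with the number of enhancers, simulated with the Gillespie algorithm so the effect on mean expression and noise (CV²) can be read off. The mRNA synthesis/degradation rates and the linear slopes are placeholders.

```python
# Illustrative sketch only: telegraph promoter with enhancer-number-dependent
# switching rates (k_on up, k_off down, both linear), simulated by Gillespie.
import numpy as np

rng = np.random.default_rng(2)

def gillespie_telegraph(k_on, k_off, k_syn=10.0, k_deg=1.0, t_end=500.0):
    """Simulate ON/OFF promoter switching with mRNA synthesis/degradation;
    return the time-averaged mean and CV^2 of the mRNA count."""
    t, gene_on, m = 0.0, 0, 0
    dwell_times, counts = [], []
    while t < t_end:
        rates = np.array([
            k_on if not gene_on else 0.0,   # OFF -> ON
            k_off if gene_on else 0.0,      # ON -> OFF
            k_syn if gene_on else 0.0,      # mRNA synthesis
            k_deg * m,                      # mRNA degradation
        ])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        dwell_times.append(dt)              # time spent at the current mRNA count
        counts.append(m)
        t += dt
        r = rng.choice(4, p=rates / total)
        if r == 0: gene_on = 1
        elif r == 1: gene_on = 0
        elif r == 2: m += 1
        else: m -= 1
    w, x = np.array(dwell_times), np.array(counts)
    mean = np.average(x, weights=w)
    var = np.average((x - mean) ** 2, weights=w)
    return mean, (var / mean**2 if mean > 0 else np.nan)

# Superadditivity: k_on rises and k_off falls linearly with enhancer number n.
for n in range(1, 6):
    k_on = 0.1 * n                          # linear increase (assumed slope)
    k_off = max(1.0 - 0.15 * (n - 1), 0.1)  # linear decrease, floored above zero
    mean, cv2 = gillespie_telegraph(k_on, k_off)
    print(f"n={n}: mean mRNA={mean:.1f}, CV^2={cv2:.3f}")
```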