5. Net work vs phase of activation and lattice spacing
Published 2025 “…Top) We simulated work loops in the half-sarcomere model at phases of activation from 0 to 0.95 in 0.05 increments and at lattice spacings from 12 to 18 nm, and plotted the net work for each condition. Bottom) We then simulated work loops over the same range, but with the stiffness of either the linear or torsional spring comprising the crossbridge head separately increased or decreased by 50%.…”
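The sweep this caption describes is a plain two-parameter grid with a stiffness perturbation layered on top. A minimal sketch of that loop, where simulate_work_loop is a hypothetical stand-in for one run of the half-sarcomere model; its name, signature, and the 0.5 nm spacing step are assumptions, not details from the source:

    import numpy as np

    def simulate_work_loop(phase, spacing_nm, spring=None, stiffness_scale=1.0):
        # Hypothetical stand-in for one half-sarcomere work-loop run;
        # returns net work. Replace with the real model call.
        return 0.0

    phases = np.arange(0.0, 1.0, 0.05)     # phase of activation: 0 to 0.95 in 0.05 steps
    spacings = np.arange(12.0, 18.5, 0.5)  # lattice spacing 12 to 18 nm (step assumed)

    # Top panel: net work over the full (phase, spacing) grid
    net_work = np.array([[simulate_work_loop(p, s) for s in spacings] for p in phases])

    # Bottom panels: same sweep with one crossbridge spring softened or stiffened by 50%
    perturbed = {
        (spring, scale): np.array([[simulate_work_loop(p, s, spring, scale)
                                    for s in spacings] for p in phases])
        for spring in ("linear", "torsional")
        for scale in (0.5, 1.5)
    }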
9. A novel RNN architecture to improve the precision of ship trajectory predictions
Published 2025 “…To address these challenges, Recurrent Neural Network (RNN) models have been applied to ship trajectory prediction (STP), allowing scalability to large data sets and capturing larger regions or anomalous vessel behavior. This research proposes a new RNN architecture that decreases the prediction error by up to 50% for cargo vessels compared to the OU model.…”
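The abstract gives no architectural detail, so the following is only a generic sketch of an RNN-based trajectory predictor, assuming PyTorch and a window of AIS-style features (latitude, longitude, speed, course); every name and hyperparameter below is an assumption, not the paper's model:

    import torch
    import torch.nn as nn

    class TrajectoryRNN(nn.Module):
        # Minimal sketch: past (lat, lon, speed, course) window -> next (lat, lon).
        def __init__(self, n_features=4, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # predict next (lat, lon)

        def forward(self, x):                  # x: (batch, window, n_features)
            _, h = self.rnn(x)                 # h: (1, batch, hidden)
            return self.head(h[-1])            # (batch, 2)

    model = TrajectoryRNN()
    window = torch.randn(8, 20, 4)             # 8 tracks, 20 past fixes, 4 features each
    next_pos = model(window)                   # (8, 2)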
10. Fair Coins Tend to Land on the Same Side They Started: Evidence from 350,757 Flips
Published 2025 “…Additional analyses revealed that the within-people same-side bias decreased as more coins were flipped, an effect that is consistent with the possibility that practice makes people flip coins in a less wobbly fashion.…”
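For intuition, the finding is easy to phrase as a simulation: give each flip some probability p_same of landing on its starting side and estimate that proportion over many flips. The p_same value below is purely illustrative, not the study's estimate:

    import random

    def flip(start_side, p_same=0.51):   # p_same is illustrative, not the paper's figure
        return start_side if random.random() < p_same else 1 - start_side

    n_flips = 350_757
    same = 0
    for _ in range(n_flips):
        start = random.randint(0, 1)     # 0/1 encodes which side faces up at the start
        if flip(start) == start:
            same += 1
    print(f"estimated same-side proportion: {same / n_flips:.4f}")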
12. S1 File
13. Confusion matrix for ClinicalBERT model.
14. Confusion matrix for LastBERT model.
15. Student model architecture.
16. Configuration of the LastBERT model.
17. Confusion matrix for DistilBERT model.
18. ROC curve for LastBERT model.
19. Sample Posts from the ADHD dataset.
20. Top-level overview for ADHD classification study.
All published 2025 as parts of the same ADHD classification study, each carrying the identical snippet: “…The considerable decrease in model size without appreciable performance loss further underlines the lower computational resources needed for training and deployment, hence facilitating greater applicability.…”
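The shared snippet describes a student model that is much smaller than its teacher with little performance loss, which is the usual outcome of knowledge distillation. A minimal sketch of the standard distillation objective (soft teacher targets plus hard labels) in PyTorch; the temperature T and mixing weight alpha are illustrative defaults, not values from the study:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # KL divergence to the temperature-softened teacher distribution...
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)                      # T^2 keeps gradient magnitudes comparable
        # ...plus ordinary cross-entropy to the hard labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Tiny usage example: logits are (batch, n_classes), labels are class indices.
    s, t = torch.randn(4, 2), torch.randn(4, 2)
    y = torch.randint(0, 2, (4,))
    loss = distillation_loss(s, t, y)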