Search alternatives:
algorithm python » algorithm within, algorithms within, algorithm both
python function » protein function
algorithm f1 » algorithm _, algorithm b, algorithm i
algorithm a » algorithm _, algorithm b, algorithms _
f1 function » f10 function, 1 function, fc function
a function » _ function
61. Multimodal reference functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”
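The snippet above reports five standard binary-classification metrics. As a rough illustration only (not the LGWO-BP model or the hospital dataset, which are not reproduced here), the following sketch shows how those quantities are conventionally computed in Python with scikit-learn; the labels and scores are made-up placeholders.

from sklearn.metrics import (precision_score, recall_score, accuracy_score,
                             f1_score, roc_auc_score)

# Hypothetical ground-truth labels, hard predictions, and predicted scores;
# they stand in for the diabetes test set described in the record.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]
y_prob = [0.10, 0.92, 0.85, 0.20, 0.70, 0.30, 0.95, 0.45]

precision   = precision_score(y_true, y_pred)   # "precision rate"
sensitivity = recall_score(y_true, y_pred)      # "sensitivity" (recall)
accuracy    = accuracy_score(y_true, y_pred)    # "correct classification rate"
f1          = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
auc         = roc_auc_score(y_true, y_prob)     # area under the ROC curve

print(precision, sensitivity, accuracy, f1, auc)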
62. Practical rules for summing the series of the Tweedie probability density function with high-precision arithmetic
Published 2019. “…These implementations need to utilize high-precision arithmetic, and are programmed in the Python programming language. A thorough comparison with existing R functions allows the identification of cases when the latter fail, and provide further guidance to their use.…”
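This record describes summing a density series term by term under high-precision arithmetic in Python. The sketch below shows only the general pattern, assuming the mpmath library and a simple stopping rule; the term function used here (the exponential series) is a hypothetical stand-in, not the Tweedie series terms derived in the paper.

from mpmath import mp, mpf, exp, factorial

mp.dps = 50  # carry 50 significant digits throughout

def sum_series(term, j_start=0, max_terms=100_000):
    """Add term(j) for j = j_start, j_start+1, ... until the partial sum stops changing."""
    total = mpf(0)
    for j in range(j_start, j_start + max_terms):
        new_total = total + term(j)
        if new_total == total:  # the term fell below the working precision
            return new_total
        total = new_total
    raise RuntimeError("series did not converge within max_terms")

# Stand-in example: exp(x) = sum over j >= 0 of x**j / j!, summed at x = 30.
x = mpf(30)
approx = sum_series(lambda j: x**j / factorial(j))
print(approx)
print(exp(x))  # reference value; the two agree to the working precision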
70. The convergence curves of the test functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”
71. Single-peaked reference functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”
74. Test results of multimodal benchmark functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”
75. Fixed-dimensional multimodal reference functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”
76. Test results of multimodal benchmark functions.
Published 2025. “…Utilizing the diabetes dataset from 130 U.S. hospitals, the LGWO-BP algorithm achieved a precision rate of 0.97, a sensitivity of 1.00, a correct classification rate of 0.99, a harmonic mean of precision and recall (F1-score) of 0.98, and an area under the ROC curve (AUC) of 1.00. …”