Search alternatives:
algorithm python » algorithm within (expand search), algorithms within (expand search), algorithm both (expand search)
python function » protein function (expand search)
1
S1 File
Published in 2024: "…In this study, we developed a computerized algorithm using the python package (pdfplumber) and validated against clinicians’ interpretation. …"
2
S1 Dataset
Published in 2024: "…In this study, we developed a computerized algorithm using the python package (pdfplumber) and validated against clinicians’ interpretation. …"
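The two entries above describe a computerized extraction algorithm built on the pdfplumber package. Purely as an illustration of that kind of step, and not the authors' code, here is a minimal sketch; the file path and page-level handling are assumptions.

```python
# Illustrative sketch only: pull the text out of each page of a report PDF with pdfplumber,
# the package named in the entries above. "report.pdf" is a placeholder path.
import pdfplumber

def extract_report_text(pdf_path: str) -> str:
    """Concatenate the text of every page in the PDF."""
    pages_text = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""  # extract_text() may return None for image-only pages
            pages_text.append(text)
    return "\n".join(pages_text)

if __name__ == "__main__":
    print(extract_report_text("report.pdf")[:500])
```

Any downstream parsing and the validation against clinicians' interpretation would sit on top of text extracted this way.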
3
4
5
Algoritmo de clasificación de expresiones de odio por tipos en español (Algorithm for classifying hate expressions by type in Spanish)
Published in 2024: "…
Model Architecture
The model is based on `pysentimiento/robertuito-base-uncased` with the following modifications:
- A dense classification layer was added over the base model
- Uses input IDs and attention masks as inputs
- Generates a multi-class classification with 5 hate categories

Dataset
HATEMEDIA Dataset: custom hate speech dataset with categorization by type:
- Labels: 5 hate type categories (0-4)
- Preprocessing:
  - Null values removed from text and labels
  - Reindexing and relabeling (original labels are adjusted by subtracting 1)
  - Exclusion of category 2 during training
  - Conversion of category 5 to category 2

Training Process
Configuration:
- Batch size: 128
- Epochs: 5
- Learning rate: 2e-5 with 10% warmup steps
- Early stopping with patience=2
- Class weights: balanced to handle class imbalance

Custom metrics:
- Recall for specific classes (focus on class 2)
- Precision for specific classes (focus on class 3)
- F1-score (weighted)
- AUC-PR
- Recall at precision=0.6 (class 3)
- Precision at recall=0.6 (class 2)

Evaluation Metrics
The model is evaluated using:
- Macro recall, precision, and F1-score
- One-vs-Rest AUC
- Accuracy
- Per-class metrics
- Confusion matrix
- Full classification report

Technical Features
Data preprocessing:
- Tokenization: maximum length of 128 tokens (truncation and padding)
- Label encoding: one-hot encoding for multi-class classification
- Data split: 80% training, 10% validation, 10% testing

Optimization:
- Optimizer: Adam with linear warmup scheduling
- Loss function: categorical crossentropy (from_logits=True)
- Imbalance handling: class weights computed automatically

Requirements
The following Python packages are required: TensorFlow, Transformers, scikit-learn, pandas, datasets, matplotlib, seaborn, numpy

Usage
1. Data format:
   - CSV file or Pandas DataFrame
   - Required column name: `text` (string type)
   - Required column name: label (integer type, 0-4) - optional for evaluation
2. Text preprocessing:
   - Automatic tokenization with a maximum length of 128 tokens
   - Long texts will be automatically truncated
   - Handling of special characters, URLs, and emojis included
3. Label encoding:
   - The model classifies hate speech into 5 categories (0-4)
   - `0`: Political hatred: expressions directed against individuals or groups based on political orientation.…"
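The description above is concrete enough to sketch the training setup it lists (robertuito base plus a dense 5-way head, 128-token inputs, Adam at 2e-5, categorical crossentropy with from_logits=True, balanced class weights). This is a hypothetical sketch, not the released HATEMEDIA code: the placeholder data, the use of the [CLS] token, and the `from_pt=True` conversion are assumptions, and the warmup schedule and early stopping listed above are omitted for brevity.

```python
# Hypothetical sketch mirroring the configuration listed above -- not the released HATEMEDIA code.
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel
from sklearn.utils.class_weight import compute_class_weight

MODEL_NAME = "pysentimiento/robertuito-base-uncased"
NUM_CLASSES, MAX_LEN = 5, 128

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# from_pt=True converts the PyTorch checkpoint on the fly (assumption: no native TF weights).
base = TFAutoModel.from_pretrained(MODEL_NAME, from_pt=True)

# Dense classification layer added over the base model, fed by input IDs and attention masks.
input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")
cls_state = base(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
logits = tf.keras.layers.Dense(NUM_CLASSES)(cls_state)  # raw logits, matching from_logits=True
model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)

texts = ["texto de ejemplo", "otro texto", "un tercero"]  # placeholder corpus
labels = np.array([0, 3, 1])                              # placeholder labels in 0-4

enc = tokenizer(texts, max_length=MAX_LEN, truncation=True,
                padding="max_length", return_tensors="np")
y = tf.keras.utils.to_categorical(labels, NUM_CLASSES)    # one-hot encoding of the labels

# Balanced class weights to counter class imbalance, as described above.
w = compute_class_weight("balanced", classes=np.unique(labels), y=labels)
class_weight = {int(c): float(v) for c, v in zip(np.unique(labels), w)}

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
          y, batch_size=128, epochs=5, class_weight=class_weight)
```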
6
Multidomain, Automated Photopatterning of DNA-functionalized Hydrogels (MAPDH).
Published in 2024: "…B) Pseudocode for MAPDH in Python. The algorithm takes as input the vials that will be flowed through the patterning chamber. …"
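The excerpt only states that the Python routine receives, as input, the vials to be flowed through the patterning chamber. Purely as an illustration of that interface, and not the authors' MAPDH code, a hypothetical loop over the supplied vials might look like this; `flow_through_chamber` is an invented stand-in for the real hardware call.

```python
# Hypothetical illustration only -- not the authors' MAPDH implementation.
# The sole detail taken from the excerpt: the routine receives the vials to flow through the chamber.
from typing import Sequence

def flow_through_chamber(vial: str) -> None:
    print(f"flowing {vial} through the patterning chamber")  # placeholder for pump/valve control

def run_mapdh(vials: Sequence[str]) -> None:
    for vial in vials:  # vials are processed in the order supplied
        flow_through_chamber(vial)

if __name__ == "__main__":
    run_mapdh(["vial_A", "vial_B", "vial_C"])  # placeholder vial identifiers
```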
7
AI for imaging plant stress in invasive species (dataset from the article https://doi.org/10.1093/aob/mcaf043)
Published in 2025: "…
- The dataframe of extracted colour features from all leaf images and lab variables (ecophysiological predictors and variables to be predicted)
- Set of scripts used for image pre-processing, feature extraction, data analysis, visualization and machine learning algorithm training, using ImageJ, R and Python.…"
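As a generic sketch of the Python end of such a workflow, not the article's scripts: load the colour-feature dataframe and fit a model to one lab variable. The file name, feature-column naming, the `water_potential` target, and the random-forest choice are all invented placeholders.

```python
# Generic illustration, not the article's scripts: fit a regressor from per-leaf colour features
# to one ecophysiological lab variable. All names below are placeholder assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("leaf_features.csv")  # colour features + lab variables per leaf (placeholder file)
feature_cols = [c for c in df.columns if c.startswith(("mean_", "sd_"))]  # e.g. mean_R, sd_hue
X, y = df[feature_cols], df["water_potential"]  # predict one lab variable (placeholder name)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
print("R^2 on held-out leaves:", r2_score(y_test, model.predict(X_test)))
```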
8
Polygon vector map distortion for increasing the readability of one-to-many flow maps: data and codes
Published in 2023: "…This directory contains:
- README file,
- server.py: can be run from the command line to create a local server (`python3 server.py` or `python server.py`). …"
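The snippet does not show server.py itself; under the assumption that it simply serves the repository's files locally, a minimal standard-library server of that kind could look like the sketch below. The port and handler are assumptions, not the actual contents of server.py.

```python
# Assumption-based sketch of a minimal local static-file server (not the repository's server.py).
import http.server
import socketserver

PORT = 8000  # placeholder port; the real server.py may use a different one

if __name__ == "__main__":
    handler = http.server.SimpleHTTPRequestHandler  # serves files from the current directory
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        print(f"Serving the current directory at http://localhost:{PORT}")
        httpd.serve_forever()
```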
9
DataSheet1_Development of a Multilayer Deep Neural Network Model for Predicting Hourly River Water Temperature From Meteorological Data.docx
Published in 2021: "…We trained the LR and DNN algorithms on Google’s TensorFlow model using Keras artificial neural network library on Python. …"
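As an illustrative sketch only, not the paper's exact model: a multilayer Keras DNN mapping hourly meteorological inputs to river water temperature. The four input features, layer widths, and training settings are assumptions, and the data below is synthetic placeholder data.

```python
# Illustrative Keras DNN sketch for hourly water-temperature regression (not the paper's model).
import numpy as np
import tensorflow as tf

n_features = 4  # e.g. air temperature, humidity, radiation, wind speed (assumed feature set)
X = np.random.rand(1000, n_features).astype("float32")                     # placeholder inputs
y = (15 + 10 * X[:, 0] + np.random.randn(1000) * 0.5).astype("float32")    # placeholder target (deg C)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # regression output: water temperature
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
print("MAE on training data (deg C):", model.evaluate(X, y, verbose=0)[1])
```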
10
An Ecological Benchmark of Photo Editing Software: A Comparative Analysis of Local vs. Cloud Workflows
Published in 2025: "…Performance Profiling Algorithms

Energy Measurement Methodology

```python
# Pseudo-algorithmic representation of measurement protocol
def capture_energy_metrics(workflow_type: WorkflowEnum,
                           asset_vector: List[PhotoAsset]) -> EnergyProfile:
    baseline_power = sample_idle_power_draw(duration=30)
    with PowerMonitoringContext() as pmc:
        start_timestamp = rdtsc()  # Read time-stamp counter
        if workflow_type == WorkflowEnum.LOCAL:
            result = execute_local_pipeline(asset_vector)
        elif workflow_type == WorkflowEnum.CLOUD:
            result = execute_cloud_pipeline(asset_vector)
        end_timestamp = rdtsc()
        energy_profile = EnergyProfile(
            duration=cycles_to_seconds(end_timestamp - start_timestamp),
            peak_power=pmc.get_peak_consumption(),
            average_power=pmc.get_mean_consumption(),
            total_energy=integrate_power_curve(pmc.get_power_trace())
        )
    return energy_profile
```

Statistical Analysis Framework
Our analytical pipeline employs advanced statistical methodologies including:
- Variance Decomposition: ANOVA with nested factors for hardware configuration effects
- Regression Analysis: Generalized Linear Models (GLM) with log-link functions for energy modeling
- Temporal Analysis: Fourier transform-based frequency domain analysis of power consumption patterns
- Cluster Analysis: K-means clustering with Euclidean distance metrics for workflow classification

Data Validation and Quality Assurance
Measurement Uncertainty Quantification
All energy measurements incorporate systematic and random error propagation analysis:
- Instrument Precision: ±0.1 W for CPU power, ±0.5 W for GPU power
- Temporal Resolution: 1 ms sampling with Nyquist frequency considerations
- Calibration Protocol: NIST-traceable power standards with periodic recalibration
- Environmental Controls: Temperature-compensated measurements in a climate-controlled facility

Outlier Detection Algorithms
Statistical outliers are identified using the Interquartile Range (IQR) method with Tukey's fence criteria (Q₁ - 1.5×IQR, Q₃ + 1.5×IQR). …"
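The Tukey-fence rule quoted above is simple enough to show directly. This is a generic implementation of that rule, not the paper's code, applied to a placeholder power trace.

```python
# Generic implementation of the IQR / Tukey-fence rule quoted above: values outside
# (Q1 - 1.5*IQR, Q3 + 1.5*IQR) are flagged as outliers. The readings below are placeholder data.
import numpy as np

def tukey_outliers(values: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking values outside Tukey's fences."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (values < lower) | (values > upper)

power_watts = np.array([41.8, 42.1, 42.5, 43.0, 42.7, 95.3, 41.9])  # placeholder trace with one spike
print(power_watts[tukey_outliers(power_watts)])                      # -> [95.3]
```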