# Model Card: Cybersecurity Text Classifier (ModernBERT-base)

Released with "RedSage: A Cybersecurity Generalist LLM" (ICLR 2026).

Authors: Naufal Suryanto (Khalifa University, project lead), Muzammal Naseer (Khalifa University), Pengfei Li (Khalifa University), Syed Talal Wasim (University of Bonn), Jinhui Yi (University of Bonn), Juergen Gall (University of Bonn), Paolo Ceravolo (University of Milan), Ernesto Damiani (University of Milan)
## Model Details
- Model Type: Binary text classification model developed for domain-specific content filtering.
- Architecture: Based on ModernBERT-base, a bidirectional transformer encoder optimized for efficiency and long-context performance.
- Domain: Cybersecurity vs. Non-Cybersecurity.
- License: Released as part of the open-source RedSage project resources.
## Intended Use
- Primary Use Case: Identifying cybersecurity-relevant documents within large-scale, unstructured web corpora such as FineWeb.
- Application: Filtering approximately 17.2 trillion tokens from Common Crawl subsets (2013–2024) to curate the 11.7B-token CyberFineWeb corpus.
- Intended Users: Researchers and developers focused on domain continual pretraining for cybersecurity LLMs.
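The filtering workflow described above can be sketched as a single streaming pass over a web corpus. The `toy_classifier` and the 0.5 score threshold below are illustrative assumptions standing in for the real ModernBERT-based model, not values reported by the authors:

```python
def filter_corpus(docs, classify, threshold=0.5):
    """Yield documents the classifier flags as cybersecurity-relevant.

    `classify` is any callable mapping text -> probability of the
    cybersecurity label; `threshold` is an assumed cutoff.
    """
    for doc in docs:
        if classify(doc) >= threshold:
            yield doc

# Toy stand-in for the real model, keyed on an obvious term.
def toy_classifier(text):
    return 0.9 if "malware" in text.lower() else 0.1

docs = ["New malware strain targets routers", "Best pasta recipes"]
kept = list(filter_corpus(docs, toy_classifier))
```

A generator keeps memory flat, which matters when the input is a trillion-token crawl rather than an in-memory list.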
## Training Data
- Source Dataset: Cybersecurity Topic Classification dataset.
- Data Origin: Labeled samples collected from Reddit, StackExchange, and arXiv, alongside web articles.
- Dataset Size:
  - Before filtering: 9.27M training samples and 459K validation samples.
  - After filtering: 4.62M training samples and 2.46K validation samples, once very short texts were removed to minimize label ambiguity.
- Labeling Method: Derived from forum categories, tags, and keyword metadata rather than LLM-generated annotations.
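The short-text removal step above can be sketched with a simple word-count filter. The `MIN_WORDS` cutoff is an assumption for illustration; the model card does not state the exact threshold used:

```python
MIN_WORDS = 20  # assumed cutoff; the exact threshold is not reported

def drop_short_texts(samples, min_words=MIN_WORDS):
    """Drop very short samples, whose topic labels tend to be ambiguous."""
    return [s for s in samples if len(s.split()) >= min_words]

short = "patch now"
long_text = " ".join(["token"] * 25)
filtered = drop_short_texts([short, long_text])
```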
## Training Procedure
- Optimizer: Adam.
- Learning Rate: 2e-5.
- Schedule: 10% warmup ratio over 2 training epochs.
- Implementation: A binary classification head attached to the ModernBERT-base encoder.
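The schedule above (peak LR 2e-5, 10% warmup) can be written out as a per-step learning-rate function. The linear-decay phase after warmup is an assumption for illustration; only the warmup ratio is reported:

```python
def lr_at_step(step, total_steps, peak_lr=2e-5, warmup_ratio=0.10):
    """Linear warmup over the first `warmup_ratio` of steps.

    After warmup, linearly decay to zero (an assumed decay shape;
    the card only specifies the 10% warmup and the 2e-5 peak).
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    remaining = total_steps - warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / remaining)
```

With 2 epochs over 4.62M samples, `total_steps` is fixed in advance, so the whole schedule is known before training starts.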
## Evaluation Results
The model was evaluated on a validation set of 2,460 samples derived from web articles, achieving the following metrics:
| Metric | Score |
|---|---|
| Accuracy | 97.3% |
| Precision | 92.8% |
| Recall | 90.2% |
| F1 Score | 91.4% |
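The four metrics above follow from a binary confusion matrix; as a sanity check, the reported precision and recall imply the reported F1 up to rounding (2pr/(p+r) ≈ 0.915 vs. 91.4%). A minimal sketch, with an illustrative confusion matrix:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1 from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

# Reported precision (92.8%) and recall (90.2%) give F1 = 2pr/(p+r) ≈ 0.915.
p, r = 0.928, 0.902
f1_from_reported = 2 * p * r / (p + r)
```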
## Limitations & Risks
- Context Sensitivity: Very short texts were removed from the training data to avoid label ambiguity, so predictions on very short inputs may be unreliable.
- Temporal Bias: The model identifies cybersecurity content based on trends observed in web data up to late 2024; emerging threats post-2024 may not be represented.
- Dual-Use Concerns: The classifier is trained to recognize cybersecurity content, including offensive security material, which carries an inherent risk of misuse if applied outside defensive or educational research.
## Citation
```bibtex
@inproceedings{suryanto2026redsage,
  title={RedSage: A Cybersecurity Generalist {LLM}},
  author={Naufal Suryanto and Muzammal Naseer and Pengfei Li and Syed Talal Wasim and Jinhui Yi and Juergen Gall and Paolo Ceravolo and Ernesto Damiani},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=W4FAenIrQ2}
}
```
## Model Tree
- Model: RISys-Lab/CyberSec-Text-Classification-ModernBert-Base
- Base model: answerdotai/ModernBERT-base