Latest version: v0.1.post1
A nimble implementation of the Direct Preference Optimization (DPO) algorithm with Causal Transformer and LSTM models for time series data, inspired by the original DPO paper on fine-tuning unsupervised language models from preference data.
No known vulnerabilities found
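To illustrate the objective the description refers to, below is a minimal sketch of the standard DPO loss from the original paper, written in PyTorch. This is not this package's API; the function name, arguments, and default beta are hypothetical and shown only to clarify the technique.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Illustrative DPO loss (not this package's API).

    Encourages the policy to rank the preferred (chosen) sequence above the
    rejected one, while the log-ratios against a frozen reference model act
    as an implicit KL regularizer.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log(sigmoid(beta * margin)) == softplus(-beta * margin)
    return F.softplus(-beta * (chosen_logratio - rejected_logratio)).mean()

if __name__ == "__main__":
    # Toy usage: random sequence log-probabilities for 4 preference pairs.
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b),
                    torch.randn(b), torch.randn(b))
    print(loss.item())
```

The same objective applies whether the policy is a Causal Transformer or an LSTM; only the model producing the sequence log-probabilities changes.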