DeepTensor AB and WASP Media & Language proudly joined forces to introduce *AURORA-M*: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order.
We released three models:
- Aurora-M base: pretrained on 377B tokens of multilingual data.
- Aurora-M instruct: instruction-tuned on the Slim-Orca dataset on top of the base model.
- Aurora-M Biden-Harris red-teamed: finetuned on 58B tokens of instruction-tuning data mixed with the Biden-Harris red-teaming dataset.
[2] https://lnkd.in/dQWGFzKk
[3] https://lnkd.in/dDpyRNDh
[4] Paper: https://lnkd.in/dsXhS_aq