
Meta Unveils Llama 2, a Family of High-Performance Language Models

7B to 70B Parameters, Trained on 2 Trillion Tokens

Meta has announced the release of Llama 2, a family of large language models (LLMs) trained on 2 trillion tokens. The three currently available model sizes are 7B, 13B, and 70B parameters, offering a range of capabilities for natural language processing tasks.

Improved Context Length and Data Scale

Llama 2 models have double the context length of Llama 1, allowing them to take in more information per interaction and generate more coherent text. They are also trained on 40% more data than Llama 1, improving accuracy and broadening language understanding.
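To make the claims above concrete, here is a small sketch of the lineup in Python. The parameter counts and training-token figure come from the announcement; the context lengths (2,048 tokens for Llama 1, doubled to 4,096 for Llama 2) are the widely documented values and are assumed here for illustration.

```python
# Figures from the Llama 2 announcement, plus the commonly
# documented context windows (assumed, not stated in this post).
LLAMA2_SIZES = {"7B": 7_000_000_000, "13B": 13_000_000_000, "70B": 70_000_000_000}
TRAINING_TOKENS = 2_000_000_000_000  # 2 trillion tokens

LLAMA1_CONTEXT = 2048                 # tokens per interaction (Llama 1)
LLAMA2_CONTEXT = LLAMA1_CONTEXT * 2   # doubled in Llama 2 -> 4096

for name, params in LLAMA2_SIZES.items():
    # Tokens-seen-per-parameter is one rough measure of training data scale.
    print(f"Llama 2 {name}: {TRAINING_TOKENS // params} tokens per parameter")
```

Note that even the largest 70B model sees roughly 28 training tokens per parameter, which reflects the emphasis on data scale rather than parameter count alone.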
