
“Nobody knows what makes humans so much more efficient”: Small language models based on Homo Sapiens could help explain how we learn and improve the efficiency of AI – for better or for worse


Technology companies are shifting their focus from building the largest language models (LLMs) to developing small language models (SLMs) that can match or even outperform them.

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters) and GPT-4 (an estimated 1.8 trillion parameters) sit at the large end of the scale, while Microsoft’s Phi-3 family ranges from 3.8 to 14 billion parameters and Apple Intelligence uses “only” around 3 billion.
