Stability AI has released Stable LM 3B, an experimental language model optimized for portable devices. It is the latest addition to the company's line of open-source language models.
Stable LM 3B has 3 billion parameters, a compact size suited to running on portable devices such as laptops and handhelds. Despite its smaller size, it delivers competitive performance: according to Stability AI, it outperforms previous 3B-parameter language models and even rivals some open-source models with 7 billion parameters.
Because it is smaller and more efficient, Stable LM 3B requires fewer compute resources and is cheaper to operate, putting it within reach of more users. It also consumes less power, which Stability AI highlights as an environmental benefit.
The model is designed for a wide range of applications, including technologies with conversational capabilities. Stable LM 3B offers improved text generation at fast execution speeds and can be fine-tuned for specific uses such as programming assistance.
The development of Stable LM 3B broadens the range of applications that are viable on edge devices or home PCs. Individuals and companies can now build cutting-edge technologies with strong conversational capabilities, such as creative writing assistance, while keeping costs low and performance high.
However, Stability AI notes that Stable LM 3B is a base model that requires fine-tuning for specific applications and for safe use. The company also describes it as an “experimental version.”
The company encourages the community to try the model, which is available for download on the Hugging Face platform. It is released under the Creative Commons CC BY-SA 4.0 license.
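For readers who want to try the model, a minimal sketch of loading it with the Hugging Face `transformers` library might look like the following. The repo id `stabilityai/stablelm-3b-4e1t` is an assumption based on Stability AI's Hugging Face organization; check their model page for the exact name. Note that downloading a 3B-parameter model requires several gigabytes of disk space and memory.

```python
# Sketch: generating text with Stable LM 3B via Hugging Face transformers.
# Assumes `transformers` and `torch` are installed (pip install transformers torch).
# The repo id below is an assumption; verify it on Stability AI's Hugging Face page.
MODEL_ID = "stabilityai/stablelm-3b-4e1t"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Download the model (on first call) and generate a completion for `prompt`."""
    # Imported lazily so the module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Small language models are useful because"))
```

As a base model, the output is a raw continuation of the prompt rather than a chat-style answer; per Stability AI's guidance, it should be fine-tuned before use in a real application.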
Stability AI, best known for its image model Stable Diffusion, also releases open-source language models under the name “StableLM”. The first release, StableLM-Alpha in April, included models with 3 and 7 billion parameters, with plans to scale up to 175 billion. The models are available for commercial use under the Creative Commons CC BY-SA 4.0 license and are trained on an experimental version of EleutherAI’s “The Pile” dataset.