Apple Empowers Open Research Community with Release of OpenELM Models
Tech giant Apple has made a significant move in the world of artificial intelligence by releasing OpenELM, a family of open-source large language models (LLMs). These models are designed to run on-device rather than through cloud servers, marking a shift toward greater privacy and efficiency.
The release comprises eight OpenELM models across four parameter sizes: four pre-trained with the CoreNet library and four instruction-tuned counterparts. Apple applies a layer-wise scaling strategy that allocates parameters non-uniformly across the transformer layers to improve accuracy and efficiency, and it has published the code, training logs, multiple checkpoints, and a complete framework for training and evaluation.
The release of OpenELM is aimed at empowering the open research community, allowing developers and companies to investigate risks and data biases. The models can be used as-is or modified according to specific needs, fostering innovation and collaboration within the AI community.
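For developers who want to experiment, the released checkpoints can be loaded with standard open-source tooling. The sketch below is illustrative only: the Hugging Face model ID, the tokenizer choice, and the trust_remote_code flag are assumptions about how such checkpoints are typically published, not details from Apple's announcement.

```python
# Minimal sketch: loading an instruction-tuned OpenELM checkpoint with the
# Hugging Face transformers library and generating text locally.
# The model ID and tokenizer below are assumptions, not taken from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"   # assumed Hub ID for the smallest instruction-tuned variant
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # assumed compatible tokenizer (gated; requires access)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumes the checkpoint ships custom model code
)

# Run a short completion entirely on the local machine (no cloud calls).
inputs = tokenizer("Summarize why on-device LLMs help privacy:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because inference happens locally, prompts and outputs never leave the device, which is the privacy argument behind on-device models in the first place.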
Moreover, Apple’s decision to share this work openly is not just about giving back to the community; it is also a strategic move to attract talent. By showcasing its commitment to open research, Apple hopes to recruit engineers, scientists, and experts who share its values and vision for the future of AI technology.
Looking ahead, rumors suggest that iOS 18 may include new AI features, with speculation that Apple could run large language models on-device for privacy reasons. Such a move would further underscore Apple’s emphasis on user privacy and data security in the age of AI.