
Hugging Face Releases SmolVLA Open Source AI Model For Robotics Workflows

Hugging Face on Tuesday released SmolVLA, an open source vision language action (VLA) artificial intelligence (AI) model aimed at robotics workflows and training-related tasks. The company claims the model is small and efficient enough to run locally on a computer with a single consumer GPU, or on a MacBook. The New York, US-based AI model repository also claims that SmolVLA can outperform models much larger than itself. The model is currently available to download.

Hugging Face’s SmolVLA AI Model Can Run Locally on a MacBook

According to Hugging Face, progress in robotics has been slow despite rapid growth in the broader AI space. The company attributes this to a lack of high-quality, diverse data, as well as a shortage of large language models (LLMs) designed for robotics workflows.

VLAs have emerged as a solution to one of these problems, but most of the leading models, from companies such as Google and Nvidia, are proprietary and trained on private datasets. As a result, the larger robotics research community, which relies on open-source data, faces major bottlenecks in reproducing or building on these AI models, the post highlighted.

These VLA models can take in images, videos, or a direct camera feed, understand real-world conditions, and then carry out a prompted task using robotics hardware.

Hugging Face says SmolVLA addresses both pain points currently faced by the robotics research community: it is an open-source, robotics-focused model trained on open datasets from the LeRobot community. SmolVLA is a 450-million-parameter AI model that can run on a desktop computer with a single compatible GPU, or even on one of the newer MacBook devices.

Coming to the architecture, SmolVLA is built on the company's vision language models (VLMs). It pairs a SigLIP vision encoder with a language decoder (SmolLM2). Visual information is captured and extracted by the vision encoder, while natural language prompts are tokenised and fed into the decoder.
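To make that data flow concrete, here is a minimal, illustrative sketch of such a two-stream backbone in PyTorch. The module names, dimensions, and the hidden_size/embed attributes are assumptions for illustration, not Hugging Face's actual implementation.

# Illustrative sketch only: module names, dimensions, and the
# `hidden_size`/`embed` attributes below are assumptions, not SmolVLA's code.
import torch
import torch.nn as nn

class VLMBackboneSketch(nn.Module):
    def __init__(self, vision_encoder, language_decoder, tokenizer, dim=960):
        super().__init__()
        self.vision_encoder = vision_encoder      # e.g. a SigLIP image tower
        self.language_decoder = language_decoder  # e.g. a SmolLM2-style decoder
        self.tokenizer = tokenizer
        # project image patch features into the decoder's embedding space
        self.vision_proj = nn.Linear(vision_encoder.hidden_size, dim)

    def forward(self, image, prompt):
        patches = self.vision_encoder(image)            # (B, n_patches, d_vis)
        visual_tokens = self.vision_proj(patches)       # (B, n_patches, dim)
        # tokenise the natural-language instruction
        ids = self.tokenizer(prompt, return_tensors="pt").input_ids
        text_tokens = self.language_decoder.embed(ids)  # (B, n_text, dim)
        # both streams enter the decoder as one token sequence
        sequence = torch.cat([visual_tokens, text_tokens], dim=1)
        return self.language_decoder(sequence)          # contextual features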

When dealing with movements or physical actions (executing the task via robotic hardware), sensorimotor signals are condensed into a single token. The decoder then combines all of this information into a single stream and processes it together. This enables the model to understand the real-world data and the task at hand contextually, rather than as separate entities.
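Continuing the sketch above, a whole sensor reading (joint angles, gripper state, and so on) could be mapped by one linear layer to a single extra token that joins the same stream; the layer name and dimensions here are, again, assumptions.

# Continuing the sketch above: one linear layer condenses the full
# sensorimotor vector into a single token (names/sizes are assumptions).
class StateTokenSketch(nn.Module):
    def __init__(self, state_dim, dim=960):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, dim)

    def forward(self, state):                        # state: (B, state_dim)
        return self.state_proj(state).unsqueeze(1)   # (B, 1, dim): one token

# the token joins the stream the decoder already processes jointly:
# sequence = torch.cat([visual_tokens, text_tokens, state_token], dim=1)
# features = language_decoder(sequence)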

SmolVLA then sends everything it has learned to another component called the action expert, which figures out what action to take. The action expert is a transformer-based architecture with 100 million parameters. It predicts a series of future moves for the robot (walking steps, arm movements, and so on), also known as action chunks.
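The following illustrative sketch shows what such a chunk-predicting head could look like. A plain regression head stands in here for whatever generation scheme SmolVLA actually uses, and every name and size is an assumption.

# Illustrative sketch of a chunk-predicting action expert; a simple
# regression head stands in for the model's real scheme (all assumptions).
class ActionExpertSketch(nn.Module):
    def __init__(self, dim=960, action_dim=7, chunk_size=50):
        super().__init__()
        self.chunk_size, self.action_dim = chunk_size, action_dim
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, chunk_size * action_dim)

    def forward(self, features):          # (B, n_tokens, dim) from the VLM
        x = self.blocks(features)
        pooled = x.mean(dim=1)            # summarise the whole sequence
        chunk = self.head(pooled)         # one pass -> a chunk of actions
        return chunk.view(-1, self.chunk_size, self.action_dim)

Predicting a chunk at a time, rather than one step per forward pass, is what lets a small model keep a robot moving smoothly between inference calls.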

While the model applies to a niche audience, those working in robotics can download the open weights, datasets, and training recipes to reproduce or build on SmolVLA. Additionally, robotics enthusiasts with access to a robotic arm or similar hardware can download these to run the model and try out real-time robotics workflows, as sketched below.
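For those who want to try it, here is a hypothetical loading-and-inference sketch via the LeRobot library. The import path, checkpoint name ("lerobot/smolvla_base"), observation keys, and tensor shapes are assumptions based on LeRobot conventions, so consult the model card for the exact API before running.

# Hypothetical usage sketch: import path, checkpoint name, observation keys,
# and shapes are assumptions based on LeRobot conventions.
import torch
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
policy.eval()

observation = {
    "observation.images.top": torch.zeros(1, 3, 256, 256),  # dummy camera frame
    "observation.state": torch.zeros(1, 6),                  # dummy joint state
    "task": ["pick up the red cube"],                        # language prompt
}
with torch.no_grad():
    action = policy.select_action(observation)  # next action for the robot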


