Hugging Face Releases SmolVLA Open Source AI Model For Robotics Workflows


Hugging Face on Tuesday released SmolVLA, an open-source vision-language-action (VLA) artificial intelligence (AI) model aimed at robotics workflows and training-related tasks. The company claims the model is small and efficient enough to run locally on a computer with a single consumer GPU, or on a MacBook. The New York, US-based AI model repository also claims that SmolVLA can outperform models that are much larger than it. The model is currently available to download.

Hugging Face’s SmolVLA AI Model Can Run Locally on a MacBook

According to Hugging Face, advancements in robotics have been slow, despite rapid growth in the broader AI space. The company attributes this to two gaps: a lack of high-quality, diverse data, and a lack of large language models (LLMs) designed specifically for robotics workflows.

VLAs have emerged as a solution to one of these problems, but most of the leading models from companies such as Google and Nvidia are proprietary and trained on private datasets. As a result, the larger robotics research community, which relies on open-source data, faces major bottlenecks in reproducing or building on these AI models, the post highlighted.

These VLA models can take in images, videos, or a direct camera feed, understand the real-world conditions, and then carry out a prompted task using robotics hardware.
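In code, that perceive-understand-act pipeline boils down to a simple loop. The sketch below is purely illustrative: the Action type, control_loop function, and the policy, robot, and camera objects are hypothetical stand-ins, not part of any particular library.

```python
"""Illustrative only: the generic perceive-understand-act loop a VLA model runs.
All names here (Action, control_loop, policy/robot/camera) are hypothetical."""

from dataclasses import dataclass


@dataclass
class Action:
    joint_deltas: list[float]  # e.g. target joint-angle changes


def control_loop(policy, robot, camera, instruction: str, steps: int = 100) -> None:
    """Run a prompted task: observe, infer the next action, execute it."""
    for _ in range(steps):
        frame = camera.get_frame()                      # image, video, or live camera feed
        state = robot.read_state()                      # current real-world condition
        action = policy.act(frame, state, instruction)  # model picks the next move
        robot.apply(action)                             # carry it out on hardware
```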

Hugging Face says SmolVLA addresses both pain points currently facing the robotics research community: it is an open-source, robotics-focused model trained on an open dataset from the LeRobot community. SmolVLA is a 450-million-parameter AI model that can run on a desktop computer with a single compatible GPU, or even on one of the newer MacBook devices.
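As a rough idea of what running it locally might look like, here is a minimal loading sketch. It assumes the checkpoint is published as lerobot/smolvla_base and that Hugging Face's lerobot library exposes a SmolVLAPolicy class at the import path shown; both are assumptions to verify against the model card, not confirmed API.

```python
# Sketch, not confirmed API: assumes lerobot exposes SmolVLAPolicy and that
# the released checkpoint lives at "lerobot/smolvla_base" on the Hub.
import torch
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy  # assumed import path

# Single consumer GPU, an Apple Silicon MacBook, or plain CPU.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)

policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")  # ~450M parameters
policy.to(device).eval()
```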

Coming to the architecture, SmolVLA is built on the company's vision language models (VLMs). It pairs a SigLIP vision encoder with a language decoder (SmolLM2): visual information is captured and extracted by the vision encoder, while natural-language prompts are tokenised and fed into the decoder.

When dealing with movement or physical action (executing the task via robotic hardware), sensorimotor signals are projected into a single token. The decoder then combines all of this information into a single stream and processes it together, which lets the model understand the real-world data and the task at hand contextually, rather than as separate entities.
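A short PyTorch sketch of that single-stream fusion is below. The FusedStream module, the projection layer, and all dimensions are invented for illustration; this is not SmolVLA's actual implementation.

```python
# Illustrative PyTorch sketch of the single-stream fusion described above.
# Module names and sizes are made up for clarity; this is not SmolVLA's code.
import torch
import torch.nn as nn


class FusedStream(nn.Module):
    def __init__(self, d_model: int = 960, state_dim: int = 14):
        super().__init__()
        # Projects raw sensorimotor signals (joint angles etc.) into ONE token.
        self.state_proj = nn.Linear(state_dim, d_model)

    def forward(self, vision_tokens, text_tokens, state):
        # vision_tokens: (B, Nv, D) embeddings from the vision encoder
        # text_tokens:   (B, Nt, D) tokenised prompt embeddings for the decoder
        # state:         (B, state_dim) raw sensorimotor readings
        state_token = self.state_proj(state).unsqueeze(1)  # (B, 1, D)
        # One stream: the decoder attends over everything jointly, so image,
        # instruction, and robot state are understood in context, not separately.
        return torch.cat([vision_tokens, text_tokens, state_token], dim=1)
```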

SmolVLA then sends everything it has learned to another component called the action expert, which figures out what action to take. The action expert is a transformer-based architecture with 100 million parameters. It predicts a series of future moves for the robot (walking steps, arm movements, and so on), known as action chunks.
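The sketch below captures the shape of that idea: a small transformer head reads the fused features and emits a whole chunk of future actions at once. Layer counts, sizes, and the 50-step chunk length are placeholder guesses, not SmolVLA's published configuration.

```python
# Illustrative sketch of an "action expert": a transformer head that turns
# fused features into a chunk of future actions. All sizes are placeholders.
import torch
import torch.nn as nn


class ActionExpert(nn.Module):
    def __init__(self, d_model: int = 960, action_dim: int = 7, chunk_len: int = 50):
        super().__init__()
        self.chunk_len = chunk_len
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, action_dim)  # one action per chunk step

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (B, N, D) single-stream features from the VLM decoder.
        x = self.backbone(fused)
        # Pool, then predict chunk_len future actions (e.g. arm joint targets).
        pooled = x.mean(dim=1, keepdim=True).expand(-1, self.chunk_len, -1)
        return self.head(pooled)  # (B, chunk_len, action_dim) = an action chunk
```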

While SmolVLA targets a niche audience, those working in robotics can download the open weights, datasets, and training recipes to reproduce or build on the model. Robotics enthusiasts who have access to a robotic arm or similar hardware can also download these assets to run the model and try out real-time robotics workflows, as in the sketch below.
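For fetching the released artefacts, huggingface_hub's snapshot_download is the standard route; note that the repo identifiers in this sketch are assumed placeholders, so check the SmolVLA model card for the real ones.

```python
# Sketch using huggingface_hub (a real library); the repo ids are assumed
# placeholders -- check the SmolVLA model card for the actual identifiers.
from huggingface_hub import snapshot_download

weights_dir = snapshot_download(repo_id="lerobot/smolvla_base")  # open weights (assumed id)
data_dir = snapshot_download(
    repo_id="lerobot/svla_so100_stacking",  # example LeRobot community dataset (assumed id)
    repo_type="dataset",
)
print(weights_dir, data_dir)
```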


