Hugging Face's new SmolVLM-256M model, which requires less than one gigabyte of GPU memory, outperforms Idefics 80B, the model the company released just 17 months ago, despite that system being roughly 300 times larger.
The models, SmolVLM-256M and SmolVLM-500M, are designed to work well on "constrained devices" such as laptops with around 1GB of RAM or less. The team says they are also ideal for ...
Hugging Face Inc. today open-sourced SmolVLM-256M, a new vision language model with the lowest parameter count in its category. The model's small footprint allows it to run on devices such as ...
AI systems like Qwen 2.5 VL, Moondream, and SmolVLM are reshaping industries by bridging the gap between visual and textual data. But with so many options, each boasting unique strengths ...
A team at AI dev platform Hugging Face has released what it claims are the smallest AI models that can analyze images, short videos, and text: SmolVLM-256M and SmolVLM-500M.
Hugging Face has introduced two new models in its SmolVLM series, which it claims are the smallest vision language models (VLMs) to date.
Hugging Face introduced the two new variants of its SmolVLM vision language model family last week. The new artificial intelligence (AI) models are available in 256 million and 500 million parameter sizes ...
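To make the "runs on constrained devices" claim concrete, here is a minimal sketch of loading the 256M model locally with the transformers library. The checkpoint name HuggingFaceTB/SmolVLM-256M-Instruct, the Vision2Seq loading path, and the chat-template prompt format are assumptions about how the release is packaged, not details taken from the coverage above.

```python
# Sketch: run the ~256M-parameter SmolVLM model on a CPU-only machine.
# The model ID below is an assumed checkpoint name, not confirmed by the articles.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"  # assumed published checkpoint

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~0.5 GB of weights at half precision
).to("cpu")  # small enough that no GPU is required

image = Image.open("photo.jpg")  # any local image

# Build a chat-style prompt that interleaves the image with a question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

At half precision the weights of a 256-million-parameter model occupy roughly half a gigabyte, which is consistent with the sub-1GB memory figure cited above.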