In October 2023, we released LLaVA-1.5 with a simple and efficient design along with great performance on a benchmark suite of 12 datasets. It has since served as the foundation of many comprehensive studies of data, model, and capabilities of large multimodal models (LMM), and has enabled various new applications.

Today, we are thrilled to present LLaVA-1.6, with improved reasoning, OCR, and world knowledge. LLaVA-1.6 even exceeds Gemini Pro on several benchmarks.

Compared with LLaVA-1.5, LLaVA-1.6 has several improvements:

  1. Increasing the input image resolution to 4x more pixels, allowing the model to grasp more visual details. Three aspect ratios are supported, up to 672x672, 336x1344, and 1344x336 resolution.
  2. Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture.
  3. Better visual conversation for more scenarios, covering different applications, along with better world knowledge and logical reasoning.
  4. Efficient deployment and inference with SGLang (see the serving sketch below).

Along with performance improvements, LLaVA-1.6 maintains the minimalist design and data efficiency of LLaVA-1.5. It re-uses the pretrained connector of LLaVA-1.5 and still uses less than 1M visual instruction tuning samples. The largest 34B variant finishes training in ~1 day with 32 A100s.
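For efficient serving (improvement 4 above), LLaVA-1.6 can be queried through SGLang. The snippet below is a minimal sketch assuming SGLang's Python frontend and a locally launched server; the checkpoint name, port, and prompt are illustrative, so adjust them to your setup.

```python
# Launch the server first (illustrative command; adjust model path and port):
#   python -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --port 30000
import sglang as sgl

@sgl.function
def image_qa(s, image_path, question):
    # One user turn containing the image and the question, then generate the answer.
    s += sgl.user(sgl.image(image_path) + question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))

# Point the frontend at the running SGLang server.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = image_qa.run(image_path="example.jpg", question="What is shown in this image?")
print(state["answer"])
```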

Open-Source Release

We open-source LLaVA-1.6 to facilitate future development of LMMs in the community. Code, data, and models will be made publicly available.

Results

| Data (PT) | Data (IT) | Model | MMMU (val) | MathVista | MMB-ENG | MMB-CN | MM-Vet | LLaVA-Wild | SEED-IMG |
|---|---|---|---|---|---|---|---|---|---|
| N/A | N/A | GPT-4V | 56.8 | 49.9 | 75.8 | 73.9 | 67.6 | - | 71.6 |
| N/A | N/A | Gemini Ultra | 59.4 | 53 | - | - | - | - | - |
| N/A | N/A | Gemini Pro | 47.9 | 45.2 | 73.6 | 74.3 | 64.3 | - | 70.7 |
| 1.4B | 50M | Qwen-VL-Plus | 45.2 | 43.3 | - | - | 55.7 | - | 65.7 |
| 1.5B | 5.12M | CogVLM-30B | 32.1 | - | - | - | 56.8 | - | - |
| 125M | ~1M | Yi-VL-34B | 45.9 | - | - | - | - | - | - |
| 558K | 665K | LLaVA-1.5-13B | 36.4 | 27.6 | 67.8 | 63.3 | 36.3 | 72.5 | 68.2 |
| 558K | 760K | LLaVA-1.6-34B | 51.1 | 46.5 | 79.3 | 79 | 57.4 | 89.6 | 75.9 |
For more results, see the expanded tables below. More benchmarks will be added soon.
Legend: $^\dagger$ specialist model (supervised fine-tuned); $^*$ training images observed; plain numbers are zero-shot.
| Model | VQAv2 | GQA | VizWiz | TextVQA | ScienceQA |
|---|---|---|---|---|---|
| GPT-4V | 77.2 | - | - | 78.0 | - |
| Gemini Ultra | 77.8 | - | - | 82.3 | - |
| Gemini Pro | 71.2 | - | - | 74.6 | - |
| PALI-X | 86.0$^\dagger$ | - | 70.9$^\dagger$ | 71.4$^\dagger$ | - |
| CogVLM-30B | 83.4 (84.7$^\dagger$) | 65.2$^\dagger$ | 76.4$^\dagger$ | 68.1 (69.3$^\dagger$) | 92.7$^\dagger$ |
| LLaVA-1.5-13B | 80 | 63.3 | 53.6 | 61.3$^*$ | 71.6 |
| LLaVA-1.6-Vicuna-7B | 81.8 | 64.2 | 57.6 | 64.9 | 70.1 |
| LLaVA-1.6-Vicuna-13B | 82.8 | 65.4 | 60.5 | 67.1 | 73.6 |
| LLaVA-1.6-Mistral-7B | 82.2 | 64.8 | 60.0 | 65.7 | 72.8 |
| LLaVA-1.6-34B | 83.7 | 67.1 | 63.8 | 69.5 | 81.8 |
| Data (PT) | Data (IT) | Model | MMMU (val) | MMMU (test) | MathVista | MMB-ENG | MMB-CN | MM-Vet | LLaVA-Wild | SEED-IMG | MME | POPE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| N/A | N/A | GPT-4V | 56.8 | 55.7 | 49.9 | 75.8 | 73.9 | 67.6 | - | 71.6 | - | - |
| N/A | N/A | Gemini Ultra | 59.4 | - | 53 | - | - | - | - | - | - | - |
| N/A | N/A | Gemini Pro | 47.9 | - | 45.2 | 73.6 | 74.3 | 64.3 | - | 70.7 | - | - |
| 1.4B | 50M | Qwen-VL-Plus | 45.2 | 40.8 | 43.3 | - | - | - | - | - | - | - |
| 1.5B | 5.12M | CogVLM-30B | 32.1 | 30.1 | - | - | - | 56.8 | - | - | - | - |
| 558K | 665K | LLaVA-1.5-13B (336$^2$) | 36.4 | 33.6 | 27.6 | 67.8 | 63.3 | 36.3 | 72.5 | 68.2 | 1531/295 | 85.93 |
| 558K | 760K | LLaVA-1.6-Vicuna-7B (672$^2$) | 35.8 | - | 34.6 | 67.4 | 60.6 | 43.9 | 81.6 | 70.2 | 1519/332 | 86.53 |
| 558K | 760K | LLaVA-1.6-Mistral-7B (672$^2$) | 35.3 | - | 37.7 | 68.7 | 61.2 | 47.3 | 83.2 | 72.2 | 1498/321 | 86.73 |
| 558K | 760K | LLaVA-1.6-13B (672$^2$) | 36.2 | - | 35.3 | 70 | 64.4 | 48.4 | 87.3 | 71.9 | 1575/326 | 86.23 |
| 558K | 760K | LLaVA-1.6-34B (672$^2$) | 51.1 | 44.7 | 46.5 | 79.3 | 79 | 57.4 | 89.6 | 75.9 | 1631/397 | 87.73 |
 

✨ Highlights:

  • SoTA Performance! LLaVA-1.6 achieves the best performance among open-source LMMs such as CogVLM and Yi-VL. Compared with commercial models, it catches up to Gemini Pro and outperforms Qwen-VL-Plus on selected benchmarks.
  • Zero-shot Chinese capability. LLaVA-1.6's Chinese capability emerges zero-shot (only English multimodal data is used for training). Its performance on Chinese multimodal scenarios is surprisingly good, e.g., SoTA on MMBench-CN.
  • Low Training Cost. LLaVA-1.6 is trained with 32 GPUs for ~1 day, with 1.3M data samples in total. Its compute and training data cost is 100-1000x smaller than that of other models.

Qualitative Results

Mark Zuckerberg talks about Llama-3 and 600K H100s

User: Explain this to me in detail.

LLaVA-1.6-34B

The image you've provided appears to be a screenshot of a social media post, specifically from Instagram, featuring a man speaking and a text overlay. The man in the image is wearing a black t-shirt and appears to be in the middle of a conversation or presentation. The text overlay is a quote from a post by a user named "zuck," which is presumably a reference to Mark Zuckerberg, the CEO of Meta (formerly known as Facebook).

The text overlay reads:

"Some updates on our AI efforts. Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit. We're bringing our two major AI research efforts (FAIR and GenAI) closer together to support this. We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 35k H100s by the end of this year — and overall almost 600k H100s equivalents of compute if you include other GPUs. Also really excited about our progress building new AI-centric computing devices like Ray Ban Meta smart glasses. Lots more to come."

This text provides insights into Meta's artificial intelligence (AI) initiatives. It mentions that the company's long-term goal is to develop general intelligence and make it accessible to everyone. The post also discusses the integration of Meta's two AI research efforts, FAIR (Facebook AI Research) and GenAI, to support this goal.

The text also mentions the training of a next-generation AI model called Llama 3 and the construction of massive compute infrastructure to support Meta's future plans. It highlights the company's investment in NVIDIA's H100 GPUs, with a total of 35,000 H100s by the end of the year and almost 600,000 H100s equivalents of compute if other GPUs are included.

Lastly, the post expresses excitement about Meta's progress in developing AI-centric computing devices, specifically Ray Ban Meta smart glasses. The post concludes with a statement that there is more to come, suggesting that Meta has additional AI-related announcements or developments in the pipeline.

The image of the man speaking could be from a video or a live stream where he is discussing these AI updates. The context of the image suggests that it is part of a larger conversation or announcement related to Meta's AI strategy and advancements.

When to leave for pickup based on flight info?

User: I need to pick up my wife. I live in San Jose. When should I leave?

LLaVA-1.6-34B

Based on the information provided in the image, the flight is scheduled to arrive at 11:51 AM at San Francisco International Airport (SFO). If you live in San Jose, you should consider the travel time between San Jose and San Francisco, which is approximately 45-60 minutes depending on traffic conditions.

To ensure you have enough time to pick up your wife, you should leave San Jose no later than 11:00 AM to account for traffic and any unexpected delays. However, it's always a good idea to leave earlier than necessary to ensure you arrive on time.


Detailed Technical Improvement

We detail our findings and the improvements we made to LLaVA-1.5 below. More implementation details will be released and documented here in the coming days.

(1) Dynamic High Resolution

We design our model at high resolution with the aim of preserving its data efficiency. When provided with high-resolution images and representations that preserve these details, the model's capacity to perceive intricate details in an image is significantly improved. This also reduces model hallucination, i.e., conjecturing imagined visual content when confronted with low-resolution images. Our 'AnyRes' technique is designed to accommodate images of various high resolutions. We employ a grid configuration of $\{2 \times 2, 1 \times \{2,3,4\}, \{2,3,4\} \times 1\}$, balancing performance and operational cost. See our updated LLaVA-1.5 technical report for more details.


Illustration of the dynamic high resolution scheme with a grid configuration of $\{2 \times 2\}$
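To make the scheme concrete, below is a minimal Python sketch of how grid selection and tiling can work under this design. The grid list and the 336px base tile come from the configuration above; the scoring heuristic, function names, and the usage example are illustrative assumptions rather than the exact LLaVA-1.6 implementation.

```python
from PIL import Image  # used only for the illustrative usage below

# Candidate grids from {2x2, 1x{2,3,4}, {2,3,4}x1}, in units of the 336px base tile.
GRIDS = [(2, 2), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (4, 1)]
BASE = 336  # input size of the CLIP-ViT-L/336px vision encoder

def select_grid(width, height, grids=GRIDS, base=BASE):
    """Pick the candidate grid whose resolution best fits the input image.

    Heuristic: maximize the effective resolution retained after an
    aspect-preserving resize, then break ties by minimizing padded waste.
    """
    best, best_key = None, None
    for gx, gy in grids:
        tw, th = gx * base, gy * base                 # target resolution of this grid
        scale = min(tw / width, th / height)          # aspect-preserving fit
        effective = min(int(width * scale) * int(height * scale), width * height)
        wasted = tw * th - effective                  # area spent on padding
        key = (-effective, wasted)
        if best_key is None or key < best_key:
            best, best_key = (gx, gy), key
    return best

def split_into_tiles(image, grid, base=BASE):
    """Resize the image to the chosen grid resolution and cut it into base-size tiles."""
    gx, gy = grid
    resized = image.resize((gx * base, gy * base))    # aspect-ratio padding omitted for brevity
    return [resized.crop((i * base, j * base, (i + 1) * base, (j + 1) * base))
            for j in range(gy) for i in range(gx)]

# Example: a wide 1000x400 image maps to the 3x1 grid (1008x336) and yields 3 tiles;
# the tiles are encoded separately and combined with a low-resolution view of the
# full image before being passed to the LLM.
img = Image.new("RGB", (1000, 400))
grid = select_grid(*img.size)       # -> (3, 1)
tiles = split_into_tiles(img, grid)
print(grid, len(tiles))             # (3, 1) 3
```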

(2) Data Mixture

  • High-quality User Instruct Data. Our definition of high-quality visual instruction-following data hinges on two principal criteria: first, the diversity of task instructions, ensuring they adequately represent the broad spectrum of user intents likely to be encountered in real-world scenarios, particularly during the model's deployment phase; second, the quality of responses, with the objective of soliciting favorable user feedback. To achieve this, we consider two data sources: (1) Existing GPT-4V data: LAION-GPT-V and ShareGPT-4V. (2) To further facilitate better visual conversation for more scenarios, we collect a small 15K visual instruction tuning dataset covering different applications. The instructions and images come from the LLaVA demo and are real-world user requests. We carefully filter samples that may have privacy concerns or are potentially harmful, and generate the responses with GPT-4V.
  • Multimodal Document/Chart Data. (1) We remove TextCaps from our training data as we realize that TextCaps uses the same set of training images as TextVQA. This allows us to better understand the zero-shot OCR capability of our model when evaluating TextVQA during development. To maintain and further improve our model’s OCR capability, we replace TextCaps with DocVQA and SynDog-EN. (2) Motivated by Qwen-VL-7B-Chat, we further add ChartQA, DVQA, and AI2D for better chart and diagram understanding.

(3) Scaling LLM backbone

In addition to Vicuna-1.5 (7B and 13B), we consider more LLMs, including Mistral-7B and Nous-Hermes-2-Yi-34B. These LLMs have nice properties such as flexible commercial-use terms, strong bilingual support, and larger language-model capacity. This allows LLaVA to support a wider spectrum of users and more scenarios in the community. The LLaVA recipe works well with various LLMs and scales up smoothly to the 34B LLM.

Model Card

| Name | LLaVA-1.6-7B | LLaVA-1.6-13B | LLaVA-1.6-34B |
|---|---|---|---|
| Model Size (Total) | 7.06B | 13.35B | 34.75B |
| Vision Encoder | 303.5M | 303.5M | 303.5M |
| Connector | 21M | 31.5M | 58.7M |
| LLM | 6.74B | 13B | 34.39B |
| Resolution | 336 x [(2,2), (1,2), (2,1), (1,3), (3,1), (1,4), (4,1)] (all variants) | | |
| Stage-1 Training Data | 558K | 558K | 558K |
| Stage-1 Trainable Module | Connector | Connector | Connector |
| Stage-2 Training Data | 760K | 760K | 760K |
| Stage-2 Trainable Module | Full model | Full model | Full model |
| Compute (#GPU x #Hours) | 8 x 20 | 16 x 24 | 32 x 30 |
| Training Data (#Samples) | 1318K | 1318K | 1318K |
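As a quick sanity check on the Model Size rows, each total is simply the sum of its three components, up to rounding of the reported figures. For the 7B variant, for example:

$$303.5\text{M (vision encoder)} + 21\text{M (connector)} + 6.74\text{B (LLM)} \approx 7.06\text{B total}.$$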

Team

  • Haotian Liu: University of Wisconsin-Madison
  • Chunyuan Li: Bytedance/Tiktok (Part of the work was done at Microsoft Research)
  • Yuheng Li: University of Wisconsin-Madison
  • Bo Li: Nanyang Technological University (Work collaborated with ByteDance/TikTok)
  • Yuanhan Zhang: Nanyang Technological University (Work collaborated with ByteDance/TikTok)
  • Sheng Shen: University of California, Berkeley
  • Yong Jae Lee: University of Wisconsin-Madison

Acknowledgement

  • A16Z Open Source AI Grants Program.
  • We thank Lianmin Zheng, Ying Sheng, Shiyi Cao for the integration of LLaVA to SGLang.
  • This work was supported in part by NSF CAREER IIS2150012, Microsoft Accelerate Foundation Models Research, and Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration) and (No. RS-2022-00187238, Development of Large Korean Language Model Technology for Efficient Pre-training).

Citation

@misc{liu2024llava16,
    title={LLaVA-1.6: Improved reasoning, OCR, and world knowledge},
    url={https://llava-vl.github.io/blog/2024-01-30-llava-1-6/},
    author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Li, Bo and Zhang, Yuanhan and Shen, Sheng and Lee, Yong Jae},
    month={January},
    year={2024}
}

@misc{liu2023improvedllava,
      title={Improved Baselines with Visual Instruction Tuning}, 
      author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
      publisher={arXiv:2310.03744},
      year={2023},
}

@misc{liu2023llava,
      title={Visual Instruction Tuning}, 
      author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
      publisher={NeurIPS},
      year={2023},
}