ByteDance and Zhejiang University jointly launched Vista-LLaMA, a multimodal large language model that can interpret video content
ByteDance has partnered with Zhejiang University to launch Vista-LLaMA, a multimodal large language model designed to understand video content and generate high-quality video descriptions. Through a novel way of processing visual and language tokens, Vista-LLaMA addresses the problem of "hallucinations" when describing video content.
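The announcement does not spell out the token-processing change, but the Vista-LLaMA paper describes it as keeping an "equal distance" between visual and language tokens: rotary position encoding is applied between text tokens as usual but omitted when text attends to visual tokens, so a frame's influence does not fade as the generated answer grows longer. The sketch below is a minimal, single-head NumPy illustration of that idea under those assumptions; the function names, shapes, and layout are ours, not the model's actual implementation (causal masking and multi-head details are omitted for brevity).

```python
import numpy as np

def rotary(x, pos, base=10000.0):
    """Rotary position embedding (RoPE) for 2-D array x at integer positions pos."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)        # per-dimension frequencies
    angles = pos[:, None] * freqs[None, :]           # (n_tokens, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def equal_distance_attention(q_text, text_pos, k_text, v_text, k_vis, v_vis):
    """Single-head attention where text-to-text scores use RoPE (relative
    position matters) while text-to-visual scores skip it, so every visual
    token stays equally 'close' to every generated text token."""
    d = q_text.shape[-1]
    scores_txt = rotary(q_text, text_pos) @ rotary(k_text, text_pos).T / np.sqrt(d)
    scores_vis = q_text @ k_vis.T / np.sqrt(d)       # no positional rotation
    scores = np.concatenate([scores_vis, scores_txt], axis=-1)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over all keys
    return w @ np.concatenate([v_vis, v_text], axis=0)

# Toy usage: 4 visual tokens (projected video frames) and 6 text tokens.
rng = np.random.default_rng(0)
d = 8
q, k, v = (rng.normal(size=(6, d)) for _ in range(3))
k_vis, v_vis = (rng.normal(size=(4, d)) for _ in range(2))
out = equal_distance_attention(q, np.arange(6), k, v, k_vis, v_vis)
print(out.shape)  # (6, 8)
```

In a standard decoder the rotary rotation makes attention scores depend on relative distance, so visual tokens placed at the start of the sequence matter less and less as generation proceeds; dropping the rotation for visual keys is what keeps the video evidence equally weighted at every step, which is how the model is said to curb hallucinated details.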
Vista-LLaMA performs strongly on several open-ended video question-answering benchmarks, notably NExT-QA and MSRVTT-QA. It reached 60.7% accuracy on zero-shot NExT-QA and 60.5% on MSRVTT-QA, surpassing existing state-of-the-art methods. These results demonstrate Vista-LLaMA's efficiency and accuracy in video understanding and description generation.