• Lugh@futurology.todayOPM
    9 days ago

    The model family is “a new suite of state-of-the-art multimodal models trained solely with next-token prediction,” BAAI writes. “By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences”.
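    To make the quoted idea concrete, here is a minimal sketch (not BAAI's actual code; vocabulary sizes and the helper are illustrative assumptions) of how different modalities can be tokenized into one shared discrete space so a single transformer can train on the mixed stream with plain next-token prediction:

    ```python
    # Hedged illustration: disjoint ID ranges let text, image, and video
    # tokens live in one shared vocabulary. Sizes below are assumptions.
    TEXT_VOCAB = 50_000   # e.g. a BPE text tokenizer
    IMAGE_VOCAB = 8_192   # e.g. codes from a VQ-style image tokenizer
    VIDEO_VOCAB = 8_192   # e.g. codes from a video tokenizer

    # Offset each modality into its own slice of the shared vocabulary.
    IMAGE_OFFSET = TEXT_VOCAB
    VIDEO_OFFSET = TEXT_VOCAB + IMAGE_VOCAB
    VOCAB_SIZE = TEXT_VOCAB + IMAGE_VOCAB + VIDEO_VOCAB

    def to_shared_space(modality: str, token_id: int) -> int:
        """Map a modality-local token ID into the shared discrete space."""
        offset = {"text": 0, "image": IMAGE_OFFSET, "video": VIDEO_OFFSET}[modality]
        return offset + token_id

    # A mixed multimodal sequence: text tokens, image codes, then more text.
    sequence = (
        [to_shared_space("text", t) for t in [17, 942, 3]]
        + [to_shared_space("image", c) for c in [101, 2048]]
        + [to_shared_space("text", t) for t in [55]]
    )

    # Next-token prediction sees only one stream of integers: the model
    # predicts sequence[i+1] from sequence[:i+1], regardless of modality.
    inputs, targets = sequence[:-1], sequence[1:]
    print(inputs)   # → [17, 942, 3, 50101, 52048]
    print(targets)  # → [942, 3, 50101, 52048, 55]
    ```

    The point of the shared space is that the transformer never needs separate heads or losses per modality; one softmax over `VOCAB_SIZE` and one cross-entropy loss cover everything.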

    Every time it looks like closed Big Tech AI systems might steal a lead, open source turns out to be close behind, snapping at their heels. Now it seems it's the same story with multimodal AI.