
Ofir Bibi

VP Research  ·  Lightricks

// Poke, tinker, prod and tweak. Repeat.

Building the foundation of generative media — from efficient open-source models to production systems reaching millions of creators worldwide.

About

Research & Leadership

Ofir Bibi is VP Research at Lightricks, where he leads the development of generative video models and applications. His team built LTX-2, the first open-source audio-video foundation model, and now focuses on the application layers that bring it to market.

Ofir drives both technical innovation and open-source community strategy, bridging fundamental ML research with practical product deployment. Born and raised in Jerusalem, he's always been a tinkerer — taking apart radios and asking how things work. That instinct now drives some of the most efficient video generation models ever built.

Career
2015 – now
Lightricks — Researcher → Director → VP Research
Built the team that delivered the LTX Models, the algorithmic core of Facetune and other B2C products, and the engineering foundations for ML workloads in the cloud and on mobile.
2009 – 2015
BrightSource Energy — ML Engineer
Developed ML systems for real-time optimization of utility-scale concentrated solar power plants.
2009 – 2014
Hebrew University — PhD, Neural Computation
Developed novel yet practical estimation and optimization methods for statistical system simulation and prediction.
2006 – 2009
Hebrew University — BSc, Physics & CS
Dean's list and multiple excellence awards.
2001 – 2006
IDF — Team Leader & Project Manager
Led an engineering team in a technological unit, managing complex technical projects.
Selected Work

The LTX Model Journey

From the world's first real-time open-source video model to a complete audio-video foundation model — a two-year sprint to the frontier of generative media.

JAN 2026 Flagship · Open Source

LTX-2

First open-source model to generate synchronized audio and 4K video

Announced with NVIDIA at CES 2026, LTX-2 was the first production-ready model to combine native audio and video generation with fully open weights, training code, and inference code — a milestone the industry called Lightricks' "DeepSeek moment." In human preference studies it performs comparably to Sora 2 and Veo 3, while running at 18× the speed of comparable open models. For the first time, anyone could train their own audio-visual IP directly into a foundation model.

MAY 2025 Breakthrough

LTX-Video 13B

Made high-quality AI video accessible on consumer hardware

The 13B model introduced multiscale rendering — drafting motion coarsely first, then progressively refining detail — achieving speeds 30× faster than competing models of comparable size. What previously required $10,000 enterprise GPUs now ran on a consumer RTX card: 37 seconds to generate what took rivals over 25 minutes.
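For the curious, here is a minimal, illustrative sketch of the coarse-to-fine idea behind multiscale rendering. It uses a stand-in denoiser and made-up function names rather than LTX-Video's actual code or API: most denoising steps run on a small, cheap latent to settle global motion, and only a short refinement pass runs at full resolution.

# Illustrative sketch of coarse-to-fine ("multiscale") generation, not LTX-Video's implementation.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(latent, noise_level):
    """Placeholder for one diffusion denoising step (a real model predicts noise here)."""
    return latent * (1.0 - 0.1 * noise_level)

def upsample(latent, factor):
    """Nearest-neighbour upsampling of a (frames, height, width) latent grid."""
    return latent.repeat(factor, axis=1).repeat(factor, axis=2)

def generate(frames=16, size=32, coarse_steps=20, fine_steps=8):
    # Coarse pass: small spatial grid, many steps -> cheap way to lock in global motion.
    latent = rng.standard_normal((frames, size // 4, size // 4))
    for t in np.linspace(1.0, 0.0, coarse_steps):
        latent = denoise_step(latent, t)

    # Fine pass: upsample the draft, add back a little noise, refine with only a few steps.
    latent = upsample(latent, factor=4)
    latent += 0.1 * rng.standard_normal(latent.shape)
    for t in np.linspace(0.3, 0.0, fine_steps):
        latent = denoise_step(latent, t)
    return latent

video_latent = generate()
print(video_latent.shape)  # (16, 32, 32): full-resolution latent, most steps spent at low resolution

The saving comes from where the steps are spent: the expensive full-resolution pass gets only a handful of refinement steps once the cheap coarse draft has already fixed the motion.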

NOV 2024 Open Source

LTX-Video

First open-source video model to run faster than real-time

LTXV was the first open-source video model to generate a 5-second clip in under 4 seconds — faster than the video plays back. Released with full weights and training code, it seeded a global developer community that reached 9,300+ GitHub stars and integrations into ComfyUI, Diffusers, and major creative platforms. Training on Google Cloud TPUs cut a six-month development timeline to four months.

Research

Publications

arXiv 2026
LTX-2: Efficient Joint Audio-Visual Foundation Model
Ofir Bibi, Yoav HaCohen, Benny Brazowski, Nisan Chiprut, et al.
Read paper
arXiv 2026
JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion
Anthony Chen, Naomi Ken Korem, Tavi Halperin, Matan Ben Yosef, Urska Jelercic, Ofir Bibi, Or Patashnik, Daniel Cohen-Or
Read paper · Project page
arXiv 2025
LTX-Video: Realtime Video Latent Diffusion
Ofir Bibi, Yoav HaCohen, Nisan Chiprut, Benny Brazowski, Daniel Shalem, et al.
Read paper
SIGGRAPH 2021
Endless Loops: Detecting and Animating Periodic Patterns in Still Images
Tavi Halperin, Hanit Hakim, Orestis Vantzos, Gershon Hochman, Netai Benaim, Lior Sassy, Michael Kupchik, Ohad Fried, Ofir Bibi
Read paper · Project page
CGF 2019
Clear Skies Ahead: Towards Real-Time Automatic Sky Replacement in Video
Tavi Halperin, Harel Cain, Ofir Bibi, Michael Werman — Computer Graphics Forum, Vol. 38
Read paper · Watch demo
IEEE TSP 2013
Time Varying Autoregressive Moving Average Models for Covariance Estimation
Ofir Bibi, Ami Wiesel, Amir Globerson — IEEE Transactions on Signal Processing, Vol. 61, No. 11
Read paper
All publications on Google Scholar →
Press & Media

Talks & Coverage

Podcast
Code Story — Ofir Bibi, Lightricks
Jerusalem upbringing, love of photography, Lightricks' journey from on-device tricks to cloud foundation models, and the open-source strategy behind LTXV.
Listen
Interview
AiThority — VP Research at Lightricks on Generative AI
On generative AI transformation, responsible development, training data quality, and bridging fundamental research with product deployment at scale.
Read
Case Study
Google Cloud — How TPUs Cut Time-to-Market by 33%
"We achieved our goals of launching on time and being first to market, in large part thanks to TPU enabling us to innovate faster — turning a six-month project into four." — Ofir Bibi
Read
Podcast
Tech Talks Daily — Open Source Video and the Race for Faster Creativity
Deep dive on open-source strategy, training philosophy, data quality over quantity, and the road to multimodal foundation models.
Read