Kushagra "Kush" Tiwary

Email: ktiwary@mit.edu, LinkedIn: ktiwary

I am a second-year PhD student in the Camera Culture group at the MIT Media Lab, advised by Ramesh Raskar.

My research focuses on AI-based computational discovery, specifically for vision. I received my S.M. from MIT (2023) and my B.S. in ECE from the University of Illinois at Urbana-Champaign (2019). My CV is available here.

I am also a volunteer for the Media Lab's SOS Program and the EECS Graduate Application Assistance Program (GAAP). I encourage prospective students to check them out.

If you're interested in working with me, please reach out via email.

Working Snapshots¹

What if Eye...? Computationally Recreating the Evolution of Vision
We created a virtual petri dish where digital creatures evolve eyes from scratch, replaying millions of years of evolution.
Generative Design of Visual Intelligence
Perspective paper on using biological principles (natural evolution) to 1) study the evolution of vision and 2) design more intelligent vision, i.e., spark a Cambrian explosion of artificial vision.

PUBLICATIONS

Computational Discovery

What if Eye...? Computationally Recreating the Evolution of Vision
Tldr: We created a virtual petri dish where digital creatures evolve eyes from scratch, replaying millions of years of evolution.
with Aaron Young, Dan-Eric Nilsson, Tomaso Poggio, Brian Cheung, Ramesh Raskar
Generative Design of Visual Intelligence
Tldr: Can we use biological principles (natural evolution) to 1) study the evolution of vision and 2) design new forms of vision?
with Aaron Young, Brian Cheung, Dan-Eric Nilsson, Tomaso Poggio, Ramesh Raskar
DISeR: Designing Imaging Systems with Reinforcement Learning
Tldr: Can we use reinforcement learning to design new imaging systems?
with Tzofi Klinghoffer*, Nikhil Behari, Bhavya Agrawalla, Ramesh Raskar

Computer Vision & Machine Learning

Bridging the Data Provenance Gap Across Text, Speech, and Video
Tldr: Largest and first-of-its-kind longitudinal audit of text, speech, and video datasets used to train AI models
with Shayne Longpre, Nikhil Singh, Manuel Cherep, Joanna Materzynska, Sara Hooker, Jad Kabbara
DecentNeRFs: Decentralized Neural Radiance Fields from Crowdsourced Images
Tldr: Can we use crowdsourced images to create 3D models of the world?
with Zaid Tasneem, Akshat Dave, Ashok Veeraraghavan, Ramesh Raskar
Objects as Radiance Field Cameras
Tldr: Can we convert arbitrary objects into cameras to recover 3D?
with Akshat Dave, Nikhil Behari, Ramesh Raskar

NEWS

Dec 2024: Our work on the origins of the data used to build AI models was featured in the MIT Technology Review.
Sep 2024: Submitted our paper "Bridging the Data Provenance Gap Across Text, Speech, and Video" to ICLR 2025.
Sep 2024: Co-organizing the Neural Fields Beyond Conventional Cameras workshop at ECCV in Milan, Italy.
Sep 2024: Gave a TEDx Boston talk titled "Can AI Recreate 500 Million Years of Vision Evolution?".
Aug 2024: Interviewed by The Globe and Mail on the topic of AI-generated video.
Aug 2024: Contributed to the paper "Consent in Crisis", covered by The New York Times, 404 Media, Vox, and Yahoo! Finance.
Jun 2024: Our paper "DecentNeRFs: Decentralized Neural Radiance Fields from Crowdsourced Images" was accepted at ECCV 2024.
Apr 2024: Participated in the panel discussion "Frontiers of AI Research from Current MIT PhDs" at the Imagination in Action event at MIT, available on YouTube.
Mar 2024: Serving on the student search committee for the AI & Human Experience Faculty Search.
Jan 2024: Awarded an MIT Generative AI grant to research Generating New Forms of Visual Intelligence, as featured in MIT News.

MY BLOGS

PERSONAL

Footnotes

1. Publications and paper timelines are coarse snapshots of someone's research, often dependent on accept/reject decisions that work against creativity and novelty. Working Snapshots is my effort to create a more granular view: it shows only my active research direction vectors.