I'm Saquib, a computer scientist, designer, and entrepreneur. 

Current roles: I serve as the CEO of Creative Crowdfunding Protocol PBC, an Andreessen Horowitz-funded startup that builds tools and infrastructure for creative technologists and artists, bringing the power of crowdfunding to creators around the world. It is a subsidiary of Kickstarter. In addition, I founded Universal Machine Inc., a research and venture studio that builds startups.

What excites me: I'm interested in research and technical leadership roles that tackle visionary, challenging problems. My interdisciplinary projects explore the boundaries between science, design, and engineering.

Short Bio: I am a builder at heart. I started learning to code in Bangladesh without access to a computer, and when I eventually got my hands on one, I built and sold software to UK vendors at the age of 16. Over the years, I moved into particle physics and astrophysics (building software tools for muon spectroscopy and the SDSS cosmology survey), ML-capable hardware and sensor design, embodied mathematics, and web3 protocol design.

Impact and Fundraising: I have helped build six startups through my research and venture studio Universal Machine, serving as the technical architect for each and working with some amazingly talented cofounders in the process. The startups span media/entertainment, IoT, climate, and transportation. Across these startups, we have raised $45M in seed rounds from some of the top VCs, including Andreessen Horowitz and Polychain Capital.

I have advised and actively mentored some of the most successful tech startups in Bangladesh, helping them raise $25M so far. I also trained many Bangladeshi undergraduate students through a nonprofit organization I founded; many are now PhD students in top CS and design programs around the world.

Education: PhD in Human-Computer Interaction (HCI), MIT Media Lab, Massachusetts Institute of Technology. Advisors: Deb Roy (PhD), Sep Kamvar (MS). I worked at the Laboratory for Social Machines and Social Computing research groups. Previously, I studied Computational Engineering and Science, and Theoretical Physics.

Fellowships: Wildflower Schools PhD Fellowship (2 years of full funding), National Academy of Sciences (NAS) Sackler Fellowship, fully funded MS in Computational Engineering and Science (Scientific Computing and Imaging Institute), and the Distinguished Scientist Scholarship (4-year fully funded undergraduate program at Bard College, NY).

Academic research: Before the R&D and entrepreneurship life, I spent time in academia, where my research and patents incubated edtech startups and augmented reality products, and led to publications in top academic journals in HCI, AI, Physics, and History. I continue to publish original research and serve as a reviewer at top academic venues.

Selected Startups and Organizations

Creative Crowdfunding Protocol PBC

CEO (2023 - present). A subsidiary research organization of Kickstarter.com, supported by Andreessen Horowitz, Kickstarter, Union Square Ventures, and others. Building crowdfunding and creativity tools for creators and small businesses around the world.

Cofounder and CTO. Bringing due diligence to the carbon credit market using our Open Carbon Protocol, with a mission to bend the carbon emission curve faster. Funded by Polychain Capital and others.

Mavu Labs

Cofounder and core contributor. Unlocking the future of work for 304M+ mobile browser users in Africa and India. In collaboration with the Opera Browser, IDEO, and Celo Foundation.

Cofounder and core contributor. An academic and industry research consortium for inventing new dynamic mediums. For centuries, the language of math and science evolved around static mediums such as paper and the printing press. Dynamic Abstractions aims to change that for the interactive mediums invented in the last 25 years. Some of the core ideas originated in my PhD dissertation at MIT.

Research Fellow and cofounder. Privacy-aware radio sensor network and computer vision solutions for personalized learning in early childhood classrooms. The product grew out of my MS thesis at the MIT Media Lab and is deployed in 60 schools in the USA.

Chief Data Scientist (2017 - 2018) and early contributor. Data and AI tools for small business owners in Bangladesh. The products have reached 31M+ users as of 2024.

Selected Research

An illustration of an E. E. Cummings poem that also represents an embodied algebraic expression.

Embodied Mathematics by Interactive Sketching (PhD Dissertation)

My PhD dissertation is now available. It lays out interaction designs, a design framework, data structures, and algorithms for sketching embodied representations of algebra.

PhD Dissertation pdf


The dissertation was nominated by the MIT Media Lab for ACM SIGCHI Outstanding Dissertation Award.

Constructing Embodied Algebra by Sketching

Nazmus Saquib, Rubaiat Habib, Li-Yi Wei, Gloria Mark, Deb Roy 

CHI 2021 Paper

Mathematical models and expressions traditionally evolved as symbolic representations, with cognitively arbitrary rules of symbol manipulation. The embodied mathematics philosophy posits that abstract math concepts are layers of metaphors grounded in our intuitive arithmetic capabilities, such as categorizing objects and part-whole analysis. 

We introduce a design framework that facilitates the construction and exploration of embodied representations for algebraic expressions, using interactions inspired by innate arithmetic capabilities. We instantiated our design in a sketch interface that enables construction of visually interpretable compositions that are directly mappable to algebraic expressions and explorable through a ladder of abstraction. 

The emphasis is on bottom-up construction: the user sketches pictures while the system generates the corresponding algebra. We present diverse examples created with our prototype system. Coverage of the US Common Core curriculum and playtesting studies with children point to the future direction and potential of a sketch-based design paradigm for mathematics.
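
For a flavor of the bottom-up mapping, here is a minimal Python sketch (illustrative only, not the dissertation's actual data model): a sketched composition is held as a tree of groups and counted objects, and a bottom-up walk emits the corresponding algebra. The `Leaf`, `Group`, and `to_algebra` names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Union

# A sketched composition as a tree: leaves are counted objects or
# unknowns; groups repeat their contents. (Hypothetical structure.)

@dataclass
class Leaf:
    label: str               # e.g. "apple"
    count: Union[int, str]   # a concrete count or an unknown like "x"

@dataclass
class Group:
    copies: int                           # how many times the group is drawn
    children: List["Node"] = field(default_factory=list)

Node = Union[Leaf, Group]

def to_algebra(node: Node) -> str:
    """Walk the sketch tree bottom-up and emit an algebraic expression."""
    if isinstance(node, Leaf):
        return str(node.count)
    inner = " + ".join(to_algebra(c) for c in node.children)
    return f"{node.copies}*({inner})" if node.copies != 1 else f"({inner})"

# Three baskets, each sketched with x apples and 2 oranges -> 3*(x + 2)
sketch = Group(copies=3, children=[Leaf("apple", "x"), Leaf("orange", 2)])
print(to_algebra(sketch))  # 3*(x + 2)
```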

Interactive Body-driven Graphics for Augmented Video Performance

Nazmus Saquib, Rubaiat Habib, Li-Yi Wei, Wilmot Li (CHI 2019 paper)

Augmented and mixed-reality technologies enable us to enhance and extend our perception of reality by incorporating virtual graphics into real-world scenes. One simple but powerful way to augment a scene is to blend dynamic graphics with live-action footage of real people performing. This technique has been used as a special effect in music videos, scientific documentaries, and instructional materials, typically added in the post-processing stage.

As live-streaming becomes an increasingly powerful cultural phenomenon, this work explores how to enhance real-time presentations with interactive graphics to create a powerful new storytelling environment. Traditionally, crafting such an interactive and expressive performance required technical programming or highly specialized tools tailored for experts.

Our approach is different and could open up this kind of presentation to a much wider range of people. Our system leverages the rich gestural (from direct manipulation to abstract communication) and postural language of humans to interact with graphical elements. By simplifying the mapping between gestures, postures, and their corresponding output effects, our UI enables users to create customized, rich interactions with graphical elements.
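
To illustrate the mapping idea, here is a hedged Python sketch of a frame-level trigger table; the keypoint names, thresholds, and effect names are hypothetical placeholders, not the system's actual vocabulary.

```python
# A minimal sketch of the gesture -> effect mapping, assuming a pose
# tracker that yields named 2D keypoints per frame (the real system's
# tracker and effect engine are not shown here).

from typing import Callable, Dict, Tuple

Keypoints = Dict[str, Tuple[float, float]]  # joint name -> (x, y)

def hands_above_head(kp: Keypoints) -> bool:
    # image y grows downward, so "above" means a smaller y value
    return kp["left_wrist"][1] < kp["head"][1] and kp["right_wrist"][1] < kp["head"][1]

def pointing_right(kp: Keypoints) -> bool:
    return kp["right_wrist"][0] - kp["right_shoulder"][0] > 100

# User-editable mapping from effect name to its trigger (placeholders).
EFFECTS: Dict[str, Callable[[Keypoints], bool]] = {
    "spawn_confetti": hands_above_head,
    "reveal_chart": pointing_right,
}

def process_frame(kp: Keypoints) -> list:
    """Return the effects that should fire for this frame's pose."""
    return [name for name, trigger in EFFECTS.items() if trigger(kp)]

frame = {"head": (320, 80), "left_wrist": (250, 60),
         "right_wrist": (390, 70), "right_shoulder": (360, 150)}
print(process_frame(frame))  # ['spawn_confetti']
```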


Impact: To date, an eclectic group of users has used the prototype to ideate and create demos. One patent has been filed, and an Adobe livestream product based on this work is in development.


Supplementary Examples: Users across a wide range of domains have used the system to create an impressive array of examples. Some can be seen here: cooking instruction video, astronomy research presentation, interior design, meditation tutorial.

Sensei: Sensing Educational Interaction

Nazmus Saquib, Ayesha Bose, Dwyane George, Sepandar Kamvar. Sensei: Sensing Educational Interaction. Ubicomp 2018 (IMWUT). (pdf)


Sensei (Sensing Educational Interaction) is a range-based distributed sensor network that aids the observation process in early childhood Montessori classrooms. The system is currently deployed in Wildflower Schools. In a busy classroom, teachers hardly have time to observe each and every child to learn about their needs. Sensei helps teachers make sense of their classrooms.

Unobtrusive sensors tap into classroom interaction data that can be interpreted with machine learning algorithms, providing teachers with insights that would otherwise be lost in a busy classroom. The observation tools create a novel human-machine interface that empowers teachers and students to personalize the curriculum.

Proximity-sensing radios are embedded in children's shoes, learning materials, and selected landmarks in the classroom. By logging proximity data, we can reconstruct the daily social network, the teacher-student time distribution, and learning time. Based on this data, we give each teacher unique insights into their teaching style and the time they spend with each child. We can also understand which kinds of lessons interest a child. A visualization dashboard lets a teacher explore the data and compare it with their own intuition about the classroom.
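
As a toy illustration of the reconstruction step, the sketch below builds a weighted co-presence graph from proximity log rows and tallies teacher-child time; the log format and sampling interval are assumptions, not Sensei's actual schema.

```python
# Reconstructing a daily social network from proximity logs, assuming
# each row is (timestamp_sec, tag_a, tag_b) for one detected
# co-presence sample (simplified from the real pipeline).

import networkx as nx

logs = [  # fabricated sample rows
    (0,   "teacher", "ava"), (60,  "teacher", "ava"),
    (120, "teacher", "ben"), (180, "ava", "ben"),
]

SAMPLE_SEC = 60  # each detection stands for one sampling interval

G = nx.Graph()
for _, a, b in logs:
    w = G.get_edge_data(a, b, {"weight": 0})["weight"]
    G.add_edge(a, b, weight=w + SAMPLE_SEC)

# Time the teacher spent near each child during the day
teacher_time = {n: d["weight"] for n, d in G["teacher"].items()}
print(teacher_time)  # {'ava': 120, 'ben': 60}
```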

This novel way of understanding classroom dynamics has already helped teachers make better sense of what each child needs, as evidenced by our deployments and interviews at different schools.


Impact

The work led to the establishment of Wildflower Schools' Innovation Lab, which secured $3M in funding from the Chan Zuckerberg Initiative and Omidyar Network, among others, to scale and deploy Sensei in Montessori schools across the USA.

Swarm Communication to Track Montessori Learning Materials

Nazmus Saquib, Deb Roy. Children-Centered Sensing in Early Childhood Classrooms. CHI 2018 EA. (pdf)


We present a prototype and a case study in Montessori-inspired sensing: designing unobtrusive sensor networks to understand and reflect on a child's learning progress by instrumenting existing Montessori learning materials with distributed sensing techniques. Swarm robots communicate and collaborate in teams using IR and other communication methods; the same techniques can track the compositionality of blocks and other Montessori learning materials. For this work, I developed the necessary hardware and sensing techniques.
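
As a small illustration of the tracking idea, the sketch below groups pairwise IR sightings into composed structures with a union-find; the message format is a made-up stand-in for the real hardware's protocol.

```python
# Grouping pairwise neighbor reports (e.g., "block A can see block B"
# over IR) into composed structures with a union-find.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

blocks = ["b1", "b2", "b3", "b4", "b5"]
adjacent = [("b1", "b2"), ("b2", "b3"), ("b4", "b5")]  # IR sightings

parent = {b: b for b in blocks}
for a, b in adjacent:
    union(parent, a, b)

groups = {}
for b in blocks:
    groups.setdefault(find(parent, b), []).append(b)
print(list(groups.values()))  # [['b1', 'b2', 'b3'], ['b4', 'b5']]
```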

Digital Humanities: Middle Eastern History Analytics

Nazmus Saquib, Mairaj Syed, Danny Halawi


We retrace early Middle Eastern history by mining 1,400 ancient texts and creating a citation network of 50k scholars (3M edges) who passed down historical statements over 350 years, from 632 AD to 1000 AD. By reconstructing the citations and using biographical information about these scholars, we reveal unique insights about the nature of scholarship, biases, and information diffusion in the early stages of Middle Eastern history.
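
As a simplified illustration of the network construction, the sketch below turns transmission chains into a weighted directed citation graph and ranks scholars by PageRank; the chain format is an assumption, and the real corpus parsing is far more involved.

```python
# Building a citation network from ordered transmission chains of
# scholar names, earliest scholar first (fabricated examples).

import networkx as nx

chains = [
    ["scholar_A", "scholar_B", "scholar_C"],
    ["scholar_A", "scholar_B", "scholar_D"],
    ["scholar_E", "scholar_B", "scholar_C"],
]

G = nx.DiGraph()
for chain in chains:
    # each scholar cites (received the statement from) the previous one
    for earlier, later in zip(chain, chain[1:]):
        w = G.get_edge_data(later, earlier, default={"weight": 0})["weight"]
        G.add_edge(later, earlier, weight=w + 1)

# Scholars most relied upon as sources surface with high PageRank
print(sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])[:2])
```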


I founded and co-lead the project along with Mairaj Syed, an associate professor of religious studies at UC Davis. This ongoing research project started in 2011 and now has a formal name: Hikmah Lab.


Impact: The project has garnered significant attention in the history and digital humanities communities. For example, the digital humanities community organized a full conference dedicated to the technical methods and analysis developed by my team, held in January 2021. The flyer for the conference can be found here.


Grant: Middle East in the Wider World Grant from UC.

Placelet: Big Data for Small Places

Nazmus Saquib, Elizabeth Christoforetti, Sep Kamvar (MIT Media Lab Technical Report)

Placelet is a foot-traffic analytics system using privacy-aware computer vision. The system was designed and developed to collect data on pedestrian behavior in urban settings, with the aim of understanding economic activity in an area. The video stream is analyzed in real time to produce intermediate contour data using optical flow, and the contour results are uploaded to a server for further analysis; no video is saved on the device. I led the technical team for the project and contributed to ideation and design, the firmware for the data collection schedule and network protocol, and the video analysis algorithms.
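
The pipeline's privacy-preserving shape can be sketched in a few lines of OpenCV; the parameters and thresholds below are illustrative defaults, not Placelet's production values.

```python
# Derive motion contours with dense optical flow and keep only the
# contour geometry; the frames themselves never leave the device.

import cv2
import numpy as np

def motion_contours(prev_gray: np.ndarray, gray: np.ndarray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)            # per-pixel motion
    mask = (mag > 1.0).astype(np.uint8) * 255     # moving pixels
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Only these point lists would be uploaded; frames are dropped.
    return [c.reshape(-1, 2).tolist() for c in contours]

cap = cv2.VideoCapture(0)  # any video source works here
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for _ in range(100):  # process a short burst of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(len(motion_contours(prev, gray)), "moving regions")
    prev = gray
cap.release()
```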

Impact: The system was deployed in three stores in Downtown Boston, and also in several public places in collaboration with the Boston City Council. Additional sensor units capable of capturing noise and air quality were also constructed as part of the project.

Grant: The project won a Knight Foundation prototype grant in 2015.

News Coverage: Boston Globe, Fast Company

Screen Balancer: Balancing Phenotypic Screen Time in Live Telecasts

Naimul Hoque, Nazmus Saquib, Syed Masum Billah, Klaus Mueller (CSCW 2020 Paper, video)


Several prominent studies have shown that the imbalanced on-screen exposure of observable phenotypic traits like gender and skin tone in movies, TV shows, live telecasts, and other visual media can reinforce gender and racial stereotypes in society. Researchers and human rights organizations alike have long been calling on media producers to be more aware of such stereotypes. While awareness among media producers is growing, balancing the presence of different phenotypes in a video requires substantial manual effort and can typically only be done in the post-production phase. The task becomes even more challenging in a live telecast, where video producers must make instantaneous decisions with no post-production phase to refine or revert them.

In this paper, we propose Screen-Balancer, an interactive tool that assists media producers in balancing the presence of different phenotypes in a live telecast. The design of Screen-Balancer is informed by a field study conducted in a professional live studio. Screen-Balancer analyzes the facial features of the actors to determine phenotypic traits using facial detection packages; it then provides real-time visual feedback for interactive moderation of gender and skin-tone distributions.

Our user study revealed that participants were able to reduce the difference in screen time between male and female actors by 43%, and between light-skinned and dark-skinned actors by 44%, showing the promise and potential of such a tool for commercial production systems.
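
A back-of-the-envelope version of the feedback signal: tally per-frame detections by predicted attribute and report the running imbalance a producer would see. The labels and metric below are illustrative; the paper's detection and feedback design are more sophisticated.

```python
# Tally per-frame face detections by a predicted attribute and track
# the running screen-time imbalance across groups.

from collections import Counter

def imbalance(counts: Counter) -> float:
    """Max minus min share of screen time across groups, in [0, 1]."""
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    return max(shares) - min(shares)

screen_time = Counter()
# stream of per-frame detections: list of predicted attribute labels
frames = [["male", "male"], ["male", "female"], ["male"], ["female"]]
for detections in frames:
    screen_time.update(detections)
    print(dict(screen_time), f"imbalance={imbalance(screen_time):.2f}")
```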

Research Projects in Social Justice, Equity, and Implicit Bias

Here are a few papers from my research mentorship program that were published in NeurIPS workshops (Machine Learning for the Developing World, ML4D). These projects dealt with bias, equity, and social justice in the context of Bangladesh, and were done under my active mentorship and collaboration. The papers utilized computer vision, NLP, and network science methods.

Skyview and Quality of Living in Dhaka, Bangladesh

We show that people in lower economic classes may suffer from lower sky visibility, whereas people in higher economic classes may suffer from a lack of greenery in their environment, both of which could possibly be addressed by rent restructuring schemes. (paper, NeurIPS Workshop 2018)

Gender Portrayal in Bangladeshi TV

We demonstrate a noticeable discrepancy in female screen presence in Bangladeshi TV advertisements and political talk shows. Further, contrary to popular hypotheses, we demonstrate that lighter skin tones are less prevalent than darker complexions, and that quantifiable body-language markers do not provide conclusive insights about gender dynamics. Overall, these gender portrayal parameters reveal the different layers of on-screen gender politics and can help direct incentives to address existing disparities in a nuanced and targeted manner. (paper, NeurIPS Workshop 2017)

Political Statements Echo Chambers

Our results indicate cliquishness among powerful political leaders in their news appearances. We also show how these cohesive cores form through news articles and how, over a decade, news cycles change the actors belonging to these groups. (paper, NeurIPS Workshop 2018)
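
For intuition, the sketch below finds such a cohesive core with a k-core decomposition over a toy co-appearance graph; the data and the choice of k are illustrative, not the study's actual method details.

```python
# Finding a cohesive core in a co-appearance graph, assuming leaders
# are linked when they appear in the same news article (toy data).

import networkx as nx

co_appearances = [
    ("L1", "L2"), ("L1", "L3"), ("L2", "L3"),  # a tight clique
    ("L2", "L4"), ("L3", "L4"), ("L1", "L4"),
    ("L4", "L5"), ("L5", "L6"),                # peripheral actors
]

G = nx.Graph(co_appearances)
core = nx.k_core(G, k=3)   # everyone in here has >= 3 ties inside it
print(sorted(core.nodes()))  # ['L1', 'L2', 'L3', 'L4']
```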

Other Projects

My Wedding Card

My data visualization-based wedding card design was picked up and featured by Prothom Alo, the most widely read Bangladeshi daily newspaper.

A Primitive Radar

I built a primitive Synthetic Aperture Radar (SAR) during MIT IAP 2018 at Lincoln Lab. The picture shows the radar experiments at the Stata Center.

Data Visualization Book

I wrote a book on scientific data visualization with Mathematica, specifically targeting programmers from science and engineering backgrounds. It details tricks and tips I learned over a decade of using Mathematica. It was published in 2014 and has a Goodreads rating of 4.33/5.

VR Sculpting and Painting

I have been sculpting VR worlds for the last two years. It is simply a great, relaxing experience; I highly recommend it! I usually use an Oculus Quest with Tilt Brush and Medium, but I work with Gravity Sketch too. Find out more on the Art page!

Muon Particle Beam Simulation

I worked at the Jefferson Particle Physics Accelerator Lab and the College of William and Mary during my undergraduate years, modeling and simulating how muon particles pass through crystal structures. The project also became my senior thesis for my BA in Physics. Publication, Senior Project

Uncertainty Visualization

My MS thesis at the Scientific Computing and Imaging Institute developed a mathematical shape-matching pipeline for visualizing intrinsic curvature changes in isosurfaces of PDE solution fields. The pictures show an application in molecular dynamics: predicting charge distribution variation from isosurface curvature changes. Thesis pdf, defense slides.
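
As a compact illustration of the curvature idea, the sketch below extracts an isosurface with marching cubes and estimates per-vertex Gaussian curvature via angle deficit; this is a stand-in for the thesis's shape-matching pipeline, not a reproduction of it.

```python
# Extract an isosurface from a scalar field and estimate Gaussian
# curvature per vertex with the angle-deficit formula.

import numpy as np
from skimage.measure import marching_cubes

# A synthetic scalar field: a sphere, whose isosurface has known curvature
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = x**2 + y**2 + z**2

verts, faces, _, _ = marching_cubes(field, level=0.5)

deficit = np.full(len(verts), 2 * np.pi)
for tri in faces:
    p = verts[tri]
    for i in range(3):
        a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
        u, v = b - a, c - a
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        deficit[tri[i]] -= np.arccos(np.clip(cos, -1.0, 1.0))

# Angle deficit ~ Gaussian curvature * vertex area; flat regions -> 0
print("mean angle deficit:", deficit.mean())
```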