Senior Research Scientist at Meta
I work on machine learning systems, recommendation, and AI infrastructure. With a background in high-performance computing, I am interested in the systems and algorithms that make modern ML efficient, scalable, and useful in practice.
Before Meta, I received my Ph.D. from
Georgia Tech, advised by
Umit V. Catalyurek,
where I worked on high-performance computing, GPU systems, parallel algorithms, and scalable machine learning.
Building next-generation recommendation models and ML infrastructure powering Meta's core ranking and personalization systems at scale.
Model–infrastructure co-design for billion-scale ad recommendation. Built real-time graph integration improving data freshness from days to minutes, boosted training throughput by 20%, and cut feature storage cost by 3x. Lead contributor to the Ads Graph Foundational Model (GFM).
Conducted research on scalable graph-based models for efficient learning without sacrificing quality. Work published at AAAI 2025 and ICLR 2024.
Extended caching mechanisms for Meta's ML training platform, substantially reducing redundant data serving computations for key models.
Built caching infrastructure for Meta's data ingestion pipelines, eliminating redundant computation across large-scale model training runs.
Distributed graph algorithms and high-performance data structures on the SHAD framework.
Improved Facebook's graph engine performance with a novel partitioning scheme, yielding a 10% query throughput gain and up to 5x end-to-end speedup. Integrated the engine with Instagram Ads; the infrastructure still serves Instagram, Threads, and Facebook.
Built collaborative pathway editing tools for cBioPortal for Cancer Genomics (Memorial Sloan Kettering Cancer Center).
Cloud data transfer and object recognition apps (IBM Cloud, Python, Kafka).
Georgia Institute of Technology
Bilkent University, Turkey
ICML 2025 · ICLR 2025 · NeurIPS 2024 · KDD 2024 · WSDM 2024 · LoG 2024