Welcome

Hi there 👋, I'm Anxhelo Xhebraj, a compiler developer at NVIDIA working on DSLs, compilers for ML, and large-scale training.

I was a Ph.D. student at Purdue University with Tiark Rompf and an intern on the Swift Performance team at Apple (Summer 2022).
Earlier, I worked as a Machine Learning Engineer at Translated with Sébastien Bratières on machine translation (ModernMT).

github | twitter | sigplan

Publications

"Scaling Deep Learning Training with MPMD Pipeline Parallelism". A. Xhebraj, S. Lee, H. Chen, V. Grover. [doi]

"Flan: An Expressive and Efficient Datalog Compiler for Program Analysis". S. Abeysinghe, A. Xhebraj, T. Rompf. (POPL24; Distinguished Paper) [doi]

"Specializing Data Access in a Distributed File System (Generative Pearl)". P. Das, A. Xhebraj, T. Rompf. (GPCE24) [doi]

"What If We Don’t Pop the Stack? The Return of Second-Class Values". A. Xhebraj, O. Bračevac, G. Wei, T. Rompf. (ECOOP22) [doi] [github]

Teaching and Service