2026
Dynamic Delayed Tree Expansion For Improved Multi-Path Speculative Decoding
Submitted to ICML 2026.
Knowing What You Know Is Not Enough: Large Language Model Confidences Don’t Align With Their Actions
Submitted to ICML 2026.
Greedy Multi-Path Block Verification for Faster Decoding in Speculative Sampling
Submitted to ICML 2026.
Global Resolution: Optimal Multi-Draft Speculative Sampling via Convex Optimization
Oral, ICLR 2026 (top 1% of papers).
2025
Privacy-Preserving Mechanisms Enable Cheap Verifiable Inference of LLMs
2024
vTune: Verifiable Fine-Tuning for LLMs Through Backdooring
The Third Workshop on New Frontiers in Adversarial Machine Learning, NeurIPS 2024.
Fixing Failure Modes of Preference Optimisation with DPO-Positive
Open Science for Foundation Models Workshop, ICLR 2025.
The work in this paper forms the core of the Smaug LLM, which was the top open-source model on the Hugging Face LLM Leaderboard at launch and remained so for over two months.
2023
Giraffe: Adventures in Expanding Context Lengths in LLMs
2017
Understanding disentangling in β-VAE
Learning Disentangled Representations Workshop, NeurIPS 2017.