Build the Future of ML Optimization
We're looking for visionaries ready to shape next-generation machine learning frameworks that can scale across thousands of GPUs.
Open-Source Culture
Everything we build is open source
Continuous Learning
Paid time for side projects and conferences
Hackathons
Monthly team research sprints
Work with Top ML Researchers
You'll join a team of machine learning engineers and researchers building systems used by top AI teams worldwide. We blend academic innovation with enterprise-grade reliability.
- Contribute to the PyTorch ecosystem
- Flexible remote work environment
- 100% equity for researchers
Open Opportunities
Lead Distributed Systems Engineer
Building next-generation model parallelism systems with sub-millisecond communication latency.
Senior ML Framework Engineer
Designing core optimizations for PyTorch integration in distributed training systems.
ML Optimization Researcher
Researching novel algorithmic approaches to reduce memory usage and improve model convergence.
Join the Team
Application Process
1. Submit Your Application: We accept resumes, LinkedIn profiles, and GitHub portfolios.
2. Initial Interview: 30 minutes with the hiring manager and CTO.
3. Technical Assessment: Real-world engineering or research challenges.
4. Onsite Demo: Collaborate with core team members.
5. Offer: We move fast; top candidates typically receive offers within two weeks.
Send Us Your Resume
Include a brief message about why you'd want to work with us
Culture You'll Love
Flexible Hours
Set your own work hours with 22 days of remote work per month
Learning Stipend
$5,000/year for books, courses, or certifications related to your field
Hack Weekends
Two days each month dedicated to side projects and research

Join Researchers Like
Dr. Emma Rodriguez
CTO, FairScale
"Our team isn't just about code or papers; we push the boundaries of what's possible in machine learning. You'll work on problems that shape the future of AI."
Ready to Shape the Future?
We're