EraDrive’s goal is to revolutionize how spaceflight is conducted and managed through autonomy. EraDrive builds autonomy software and hardware that let spacecraft see, decide, and act with minimal human control. Our software is already flying in space, and we are growing it into a product family that covers rendezvous, on-orbit inspection, and space-based sensing. You will be one of EraDrive’s first engineering hires and a founding member of the team.
You will design and ship the learning-based components of the autonomy stack, building models that enable perception, guidance, and decision-making under real-world space mission constraints. This role is for engineers who want to see their models run in closed-loop autonomy, not just in offline benchmarks. If you like turning data, physics, and machine learning into verifiable, reliable, flight-ready code, this is your role.
What you will do
Develop and train AI/ML models for satellite autonomy across rendezvous and proximity operations, on-orbit inspection, space domain awareness, and formation-flying applications
Design AI-powered guidance, navigation, and perception components such as pose estimation, anomaly detection, behavior classification, and trajectory-planning networks
Develop LLM/VLM-based tooling for satellite operations, enabling natural-language interaction and scene interpretation
Build and maintain synthetic data pipelines that capture realistic dynamics, visual conditions, and sensor properties, including domain randomization, sim-to-real strategies, and large-scale dataset generation
Create and extend simulation environments used for model training, validation, stress testing, and adversarial evaluation
Optimize models for onboard deployment, including quantization, pruning, batching strategies, and real-time inference on constrained compute
Integrate ML modules into flight software through clean C++ and Python interfaces, ensuring controlled model usage and robust fallback behavior
Support software-in-the-loop and hardware-in-the-loop testing to close the loop between models, real sensors, and control logic, and feed the results back into model updates
Own your models end to end, from first prototype through training pipelines, internal review, regression tests, and delivery
What you bring
Strong background in machine learning, deep learning, reinforcement learning, or a closely related field
Hands-on experience building, training, and evaluating AI/ML models, especially related to perception, estimation, planning, or control
Proficiency with modern ML frameworks (e.g., PyTorch, TensorFlow) and the ability to integrate them with Python and C++ codebases
Experience developing simulation or data generation tools to support ML training and validation
Ability to design and debug training pipelines, from dataset curation through distributed training and model evaluation
Familiarity with integrating ML inference models into larger autonomy or robotics systems
Ability to work in a small, fast-moving team, take clear ownership, and communicate tradeoffs honestly
Bonus
Experience with training and testing of foundation models
Knowledge of orbital mechanics and spacecraft dynamics
Experience integrating learned perception or decision models with classical estimation or control loops
Background in sim-to-real transfer, domain adaptation, or real-world deployment of learned models
Experience with embedded inference, GPU optimization, or real-time autonomous systems
Publications or open-source work in machine learning, robotics, reinforcement learning, or autonomous systems
