Machine learning models are driving our cars, testing our eyesight, detecting our cancer, giving sight to the blind, giving speech to the mute, and dictating what we consume, enjoy, and think. These AI systems are already an integral part of our lives and will shape our future as a species.
Soon, we'll conjure unlimited content: from never-ending TV series (where we’re the main character) to personalised tutors that are infinitely patient and leave no student behind. We’ll augment our memories with foundation models—individually tailored to us through RLHF and connected directly to our thoughts via Brain-Machine Interfaces—blurring the lines between organic and machine intelligence and ushering in the next generation of human development.
This future demands immense, globally accessible, uncensorable computational power. Gensyn is the protocol that turns machine learning compute into an always-on commodity resource—outside of centralised control and as ubiquitous as electricity—accelerating AI progress and ensuring that this revolutionary technology is accessible to all of humanity through a free market.
AUTONOMY
FOCUS
REJECT MEDIOCRITY
Train highly distributed models over uniquely decentralised and heterogeneous infrastructure, rather than GPU clusters.
Research novel model architectures - design, build, test, and iterate on completely new ways of building neural networks, with an eye towards achieving Byzantine fault tolerance in a trustless compute setting.
Publish & collaborate - write research papers targeting top-tier AI conferences such as AAAI, ICML, IJCAI, and NeurIPS, and collaborate with experts from universities and research institutes.
Engineering support - work with the engineering team on wider issues concerning ML (e.g. reproducible training).
Follow best practices - build in the open with a keen focus on designing, testing, and documenting your code.
Write & engage - contribute to technical reports/papers describing the system and discuss with the community.
Extremely strong research background - with publications at major machine learning conferences (or commensurate industrial experience).
Strong background in machine learning and distributed systems.
Hands-on experience with distributed model training.
Highly self-motivated with excellent verbal and written communication skills.
Comfortable working in an applied research environment - with extremely high autonomy and unpredictable timelines.
Communication backend experience - e.g. NCCL, Gloo, and MPI.
Experience training Large Language Models (LLMs).
Competitive salary + share of equity and token pool.
Fully remote work - we hire between the West Coast (PT) and Central Europe (CET) time zones.
4x all-expenses-paid company retreats around the world per year.
Whatever equipment you need.
Paid sick leave.
Private health, vision, and dental insurance - including coverage for your spouse/dependents.