At Fluidstack, we build the compute, data centers, and power that will fuel artificial superintelligence. We supply gigawatts of compute capacity to the world's biggest AI labs at industry-defining speed.
Our team is small, fast, and obsessed with quality. We own outcomes end-to-end, challenge assumptions, and treat our customers' problems as our own. No task is beneath anyone here.
There are a few thousand people who will shape the trajectory of superintelligence. Come and be one of them.
Fluidstack is seeking a Network Engineer, Design & Engineering to join our Network Engineering team. This is a design-ownership role: you will take customer requirements — GPU shape, workload profile, scale targets, tenancy model — and produce end-to-end network architectures that are deployable, validated, and optimized for AI training and inference workloads.
This is not a traditional network engineering role. You will own the full design problem space: reasoning through topology selection, rack layout implications, power and thermal constraints, cable plant feasibility, and fabric scaling — all the way from requirements intake through design documentation that deployment teams execute against. Each customer engagement may involve a different GPU platform, a different network topology, and a different set of physical constraints. You must be able to reason from first principles through novel design challenges rather than pattern-match to a single reference architecture.
You will work closely with cross-functional partners in Hardware, DC Operations, ICT/Structured Cabling, Software Engineering, and Validation to ensure your designs are not just technically sound but physically buildable and operationally sustainable. Success means producing network designs that deployment teams can execute without ambiguity, that scale to the customer’s target, and that meet performance requirements on the first turn-up.
End-to-End Network Design: Own the design lifecycle from customer requirements through deployable architecture. Produce topology designs, IP/addressing schemes, routing policy, and fabric configuration specifications for AI training and inference fabrics. Design front-end (out-of-band management, customer access), back-end (GPU-to-GPU training fabric), and storage network architectures.
Multi-Customer Architecture Adaptability: Design network architectures that adapt to different GPU platforms (NVIDIA, AMD, custom accelerators), server form factors, and workload profiles. Each customer engagement may require a different rack layout, power envelope, cable infrastructure approach, and fabric topology.
Physical Infrastructure Integration: Translate logical network designs into physical reality. Work cross-functionally on rack elevation planning, power distribution constraints, structured cabling architecture (fiber trunk design, patch panel layouts, cable pathway routing), and cooling/airflow considerations that impact network equipment placement. Ensure designs are buildable within the physical constraints of each facility.
Design Documentation & Handover: Produce comprehensive design packages that enable deployment teams to execute independently. This includes High-Level Designs (HLDs), Low-Level Designs (LLDs), cutsheet specifications, bills of materials, cabling matrices, and design decision records. Your documentation is the contract between design intent and deployment execution.
RDMA & High-Performance Fabric Design: Design lossless Ethernet fabrics optimized for RDMA (RoCEv2) workloads including PFC configuration, ECN tuning, traffic class design, and congestion management. Understand the relationship between fabric topology, ECMP behavior, and collective communication patterns in distributed training workloads.
Cross-Functional Design Collaboration: Partner with Hardware Engineering on server/GPU platform integration, DC Operations on facility constraints and power planning, ICT on structured cabling feasibility and fiber budgets, Software Engineering on automation requirements and DCIM data modeling, and Validation teams on test plans and acceptance criteria. Your designs must satisfy constraints across all of these domains.
Design Review & Standards: Participate in and lead design review sessions. Contribute to the development of reference architectures, design standards, and reusable design patterns that accelerate future deployments. Challenge assumptions — both your own and others’ — to ensure designs are technically rigorous and operationally sound.
Design-First Network Engineer: 5+ years of network engineering experience with a demonstrated focus on network design and architecture rather than purely operational roles. You’ve designed datacenter network fabrics from requirements through deployment — not just configured them. You can articulate why a design decision was made, what tradeoffs were considered, and what constraints drove the outcome.
Deep L1–L3 Expertise: Strong command of datacenter network fundamentals including Clos/fat-tree topologies, BGP (eBGP underlay, iBGP/eBGP overlay), EVPN/VXLAN, IP addressing and subnetting at scale, and physical layer design (optics selection, fiber types, link budgets). You understand how L1 decisions cascade into L2/L3 behavior and design accordingly.
RDMA & AI Fabric Understanding: Working knowledge of RDMA network design (InfiniBand and/or RoCEv2), lossless Ethernet configuration (PFC, ECN, DCQCN), and the network performance requirements of distributed AI training workloads. You understand why fabric design decisions directly impact training job completion time.
GPU Cluster Architecture Exposure: Experience designing networks around specific GPU platforms (NVIDIA DGX/HGX, AMD MI-series, custom accelerator platforms). Understanding of how GPU topology, NVLink/NVSwitch architecture, and host networking configuration interact with fabric design.
Physical Infrastructure Fluency: Ability to reason about network design in the context of physical constraints. You’ve worked through rack layout planning, power budget allocation, structured cabling architecture, and equipment placement decisions. You don’t design networks in a vacuum — you understand that every logical decision has a physical consequence.
First Principles Thinker: You break complex design problems into fundamental components and reason through them systematically. When faced with a new GPU platform, an unfamiliar facility constraint, or a novel customer requirement, you decompose the problem rather than reaching for the nearest template. You challenge assumptions — including your own — and can defend your design decisions with rigorous reasoning.
Documentation Rigor: You produce design documentation that is clear, complete, and actionable. Your HLDs and LLDs enable deployment teams to execute without requiring you in the room. You see documentation as a design artifact, not an afterthought.
Cross-Functional Collaboration: Excellent at working across engineering disciplines. You communicate design intent clearly to non-network stakeholders (hardware, facilities, cabling) and incorporate their constraints into your designs. You earn trust through technical depth and follow-through.
Hyperscale or Large-Scale Design Background: Experience designing networks at hyperscale companies (Meta, Google, Microsoft, AWS) or large AI infrastructure providers. You’ve seen what disciplined design processes look like at scale and can adapt those patterns to a fast-growing startup.
Multi-Vendor Platform Experience: Deep familiarity with multiple network hardware platforms (Arista, Juniper, NVIDIA/Mellanox, Broadcom-based). Experience designing for specific platform capabilities and constraints, including ASIC-level considerations that impact fabric design.
Automation-Aware Design: Experience designing networks with automation in mind — consistent naming conventions, structured data models, templatable configurations. You may not write the automation yourself, but you design architectures that are automatable by default.
WAN & Interconnect Design: Experience with WAN topology design, DCI (Data Center Interconnect), optical transport, and backbone network architecture. Understanding of how campus/datacenter design connects to broader network infrastructure.
Startup Experience: You’ve built something from scratch before — ideally in a high-growth infrastructure or cloud company. You’re comfortable with rapid context switching, evolving requirements, and the intensity of early-stage company building.
Competitive total compensation package (salary + equity).
Retirement or pension plan, in line with local norms.
Health, dental, and vision insurance.
Generous PTO policy, in line with local norms.
The base salary range for this position is $200,000 – $275,000 per year, depending on experience, skills, qualifications, and location. This range represents our good faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options.
We are committed to pay equity and transparency.
Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
You will receive a confirmation email once your application has been successfully accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application. Someone from our recruiting team will be in touch.