RadixArk is an infrastructure company focused on providing comprehensive computational systems for large-scale artificial intelligence workloads. The company specializes in building frontier-grade infrastructure spanning inference optimization, training systems, reinforcement learning, orchestration platforms, custom kernels, and multi-hardware support. As of 2026, RadixArk is in a significant growth phase, having raised $100 million in seed funding to accelerate development and deployment of its technology stack 1).
RadixArk's infrastructure is built around two primary technical components. The company leverages SGLang, a specialized inference optimization framework designed to maximize throughput and efficiency in large language model deployment. SGLang provides structured generation capabilities and optimized batching mechanisms for production-scale inference workloads 2).
The second major component is Miles, a system architected for large-scale reinforcement learning (RL) and post-training operations. Miles addresses the computational and orchestration challenges associated with training models at frontier scales, where standard distributed training approaches encounter significant bottlenecks in coordination, gradient synchronization, and resource allocation.
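The core computation in such RL post-training systems is a policy-gradient update. The sketch below is a generic single-state REINFORCE step with a baseline, written in plain Python for illustration; it is not Miles's actual algorithm, and the function names are assumptions.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, rewards, lr=0.5):
    """One exact-expectation REINFORCE update for a one-state bandit:
    each action's logit moves by (reward - baseline) times the
    gradient of its log-probability."""
    probs = softmax(logits)
    # Expected reward under the current policy serves as the baseline.
    baseline = sum(p * r for p, r in zip(probs, rewards))
    new = list(logits)
    for a, r in enumerate(rewards):
        adv = r - baseline
        for k in range(len(logits)):
            # d/dlogit_k log pi(a) = 1[k == a] - probs[k]
            indicator = 1.0 if k == a else 0.0
            new[k] += lr * probs[a] * adv * (indicator - probs[k])
    return new
```

Repeated updates shift probability mass toward the higher-reward action; at frontier scale the same gradient is estimated from sampled rollouts and synchronized across thousands of workers, which is where the coordination bottlenecks mentioned above arise.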
RadixArk's platform addresses multiple layers of the AI infrastructure stack:
Inference Systems: The company provides optimized inference serving capabilities built on SGLang, enabling efficient deployment of large language models with reduced latency and improved throughput for production environments.
Training and Post-Training: RadixArk offers comprehensive support for supervised fine-tuning (SFT) and instruction tuning workflows, along with the computational infrastructure required for large-scale model training across distributed hardware configurations.
Reinforcement Learning Infrastructure: Through the Miles platform, RadixArk enables large-scale RL workloads including policy optimization, reward modeling, and agent training. These are critical components for developing advanced AI systems that require interactive learning and optimization beyond standard supervised approaches.
Orchestration and Systems: The company provides orchestration layers for managing complex multi-stage training pipelines, resource allocation across heterogeneous hardware, and coordination between inference and training workloads in production environments.
Hardware Optimization: RadixArk develops custom computational kernels and provides abstraction layers enabling efficient operation across multiple hardware platforms, including GPUs from different manufacturers and specialized AI accelerators 3).
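The orchestration layer described above can be pictured as scheduling a dependency graph of pipeline stages. This is a minimal sketch using Python's standard-library `graphlib`; the stage names (`pretrain`, `sft`, `reward_model`, `rl`, `serve`) are hypothetical and a real orchestrator would also attach resource requirements and retry policies to each stage.

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on.
pipeline = {
    "pretrain":     set(),
    "sft":          {"pretrain"},
    "reward_model": {"sft"},
    "rl":           {"sft", "reward_model"},
    "serve":        {"rl"},
}

# A valid execution order that respects every dependency.
order = list(TopologicalSorter(pipeline).static_order())
```

A topological order guarantees, for example, that reward modeling never starts before SFT has produced its checkpoint; `TopologicalSorter` also exposes a `get_ready()` interface for dispatching independent stages in parallel.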
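One common pattern for multi-hardware support is a dispatch registry: a single logical operation backed by several backend kernels, selected at call time. The sketch below is a generic illustration of that pattern, not RadixArk's kernel layer; the op and backend names are invented.

```python
# Registry keyed by (operation name, backend name).
_KERNELS = {}

def register(op, backend):
    """Decorator that records a kernel implementation for a backend."""
    def deco(fn):
        _KERNELS[(op, backend)] = fn
        return fn
    return deco

@register("vector_add", "cpu")
def _add_cpu(a, b):
    return [x + y for x, y in zip(a, b)]

@register("vector_add", "gpu_sim")
def _add_gpu_sim(a, b):
    # Stand-in for a real accelerator kernel launch.
    return [x + y for x, y in zip(a, b)]

def dispatch(op, backend, *args):
    """Look up and invoke the kernel for this op on this backend."""
    try:
        kernel = _KERNELS[(op, backend)]
    except KeyError:
        raise NotImplementedError(f"{op} has no kernel for {backend}")
    return kernel(*args)
```

Keeping the registry separate from the kernels lets new hardware targets be added by registering implementations, without touching code that calls `dispatch`.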
RadixArk operates in the AI infrastructure sector, competing with established players in distributed training, inference optimization, and MLOps platforms. The company's positioning emphasizes comprehensive coverage across the full spectrum of frontier AI workloads rather than specialization in individual components. The $100 million seed funding round reflects significant investor confidence in the market need for integrated infrastructure addressing the computational challenges of modern large-scale AI systems.
The infrastructure space has become increasingly important as organizations push the boundaries of model scale and capability, requiring specialized systems optimized for efficiency, reliability, and cost-effectiveness across training, post-training, and inference phases.