AI Cloud

Browser-Based High-Performance Computing For AI Research

Revolutionize your research and innovation with CR8DL AI Cloud: a comprehensive suite of domain-specific, high-performance computing solutions engineered to meet the needs of researchers and organizations, driving breakthroughs and accelerating discovery.

Accelerate discovery

CR8DL AI Cloud

The power of supercomputing, accessible through your browser via unified workspaces for computationally intensive work.

Origin

An intuitive web-based portal that provides simple and scalable access to high-performance computing tools and resources from a single dashboard.

Base

High-performance computing Infrastructure-as-a-Service (IaaS) providing robust compute, storage, and network capabilities.

Explore

A discovery workspace with no-code, low-code, and full-code HPC tools focused on molecular biology, quantum simulation, image processing, and more. It offers pre-configured containers for common AI frameworks to accelerate research, development, and training.
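As a rough illustration (not a CR8DL-specific tool), the snippet below shows the kind of quick check a researcher might run inside one of these pre-configured containers; it assumes a PyTorch image, though Explore also offers other frameworks.

  # A minimal sketch, assuming a PyTorch-based container with CUDA support.
  import torch

  print(f"PyTorch version : {torch.__version__}")
  print(f"CUDA available  : {torch.cuda.is_available()}")
  if torch.cuda.is_available():
      print(f"Visible GPUs    : {torch.cuda.device_count()}")
      print(f"Device 0        : {torch.cuda.get_device_name(0)}")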

Scale

The AI Platform-as-a-Service (PaaS) provides on-demand, scalable high-performance computing resources ideal for AI and machine learning tasks. Accessible from anywhere, it eliminates the need for costly hardware investments, streamlining AI implementation.

Redefining AI Cloud: The CR8DL Advantage

Accessibility

BROWSER-BASED HPC PLATFORM.

Scalability

SCALE UP OR DOWN WITH EASE.

Transparency

ONLY PAY FOR WHAT YOU USE. NO HIDDEN FEES.


CR8DL AI Infrastructure

The CR8DL AI infrastructure is housed in a secure, sustainable private data center and includes high-capacity, redundant network and storage resources to ensure reliable, uninterrupted service.

CR8DL Base GPU Cluster

Node and Cluster Technical Specifications

Compute Nodes
  • CPU – AMD EPYC 7713 Processor
  • Sockets – 2
  • Cores/Threads – 64 Cores/128 Threads per socket
  • L3 Cache – 256MB per socket
  • Clock – 2.0GHz base, boost to 3.675GHz
  • Internal Storage – 46TB over 6 x 7.7TB NVMe
  • Network – 16 x 100G Ethernet/InfiniBand
Node Accelerators – GPU
  • 8x NVIDIA A100/80GB RAM
  • HGX backplane
  • 600GB/s inter-GPU throughput via 3rd Gen NVIDIA NVLink
Cluster Storage & LAN
  • 8x Cluster capacity 256TB/16x100GE Interconnects
  • Multi-fabric 100G Ethernet/InfiniBand
Internet Access
  • Multi-homed Internet Access
  • 100Gbps Fabric
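For orientation, the sketch below shows one way a user might confirm the per-node GPU count and peer (NVLink) connectivity from inside a job. It assumes PyTorch with CUDA is available on a Base compute node; the 8x A100 and NVLink figures come from the specification above, not from the script, which simply reports whatever the node exposes.

  # A minimal sketch, assuming PyTorch with CUDA on a Base compute node.
  import torch

  n = torch.cuda.device_count()
  print(f"GPUs on this node: {n}")
  for i in range(n):
      print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")

  # Peer access between GPU pairs is typically enabled over NVLink on an
  # HGX A100 backplane; this reports the access each GPU actually has.
  for i in range(n):
      peers = [j for j in range(n)
               if j != i and torch.cuda.can_device_access_peer(i, j)]
      print(f"  GPU {i} has direct peer access to: {peers}")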

Available System Benchmarks

System: AMD EPYC 7763, NVIDIA A100-SXM-80GB
Frameworks: MXNet, PyTorch, and TensorFlow (NVIDIA Release 22.04)

Image Classification
  • Imagenet/ResNet
  • 28.11 Minutes
Object Detection (Heavy Weight)
  • COCO/Mask R-CNN
  • 43.787 Minutes
Natural Language Processing (NLP)
  • Wikipedia/BERT[1]
  • 19.828 Minutes
Speech Recognition
  • LibriSpeech/RNN-T
  • 31.291 Minutes
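The published timings come from full-dataset runs inside the NVIDIA Release 22.04 framework containers listed above. For a feel of how such a job is structured, the sketch below is a minimal data-parallel ResNet-50 training loop on synthetic data, launched with torchrun across a node's GPUs; it is an illustration under assumed defaults, not the benchmark harness itself.

  # A minimal sketch of multi-GPU, data-parallel training (ResNet-50).
  # Assumed launch command: torchrun --nproc_per_node=8 train_sketch.py
  # Synthetic batches stand in for ImageNet, so it runs on any GPU node.
  import os
  import torch
  import torch.distributed as dist
  import torchvision
  from torch.nn.parallel import DistributedDataParallel as DDP

  def main():
      dist.init_process_group("nccl")
      local_rank = int(os.environ["LOCAL_RANK"])
      torch.cuda.set_device(local_rank)

      model = torchvision.models.resnet50(num_classes=1000).cuda(local_rank)
      model = DDP(model, device_ids=[local_rank])
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
      loss_fn = torch.nn.CrossEntropyLoss()

      for step in range(10):
          # Replace these synthetic tensors with a real ImageNet DataLoader
          # (plus a DistributedSampler) for an actual benchmark-style run.
          images = torch.randn(64, 3, 224, 224, device=local_rank)
          labels = torch.randint(0, 1000, (64,), device=local_rank)
          optimizer.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()
          optimizer.step()
          if dist.get_rank() == 0:
              print(f"step {step}: loss {loss.item():.3f}")

      dist.destroy_process_group()

  if __name__ == "__main__":
      main()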