Slento Systems Documentation

Welcome to the Slento Systems documentation. Here you'll find everything you need to set up, configure, and manage your infrastructure optimization mesh and pharma discovery platform.

New to Slento Systems? Start with the Quick Start guide to get your first node running in under 5 minutes.

Products

Mesh Optimizer

Distributed hardware optimization for GPU, CPU, FPGA, and memory subsystems. JEPA-driven job routing with automatic performance tuning.

Learn more →

Pharma Discovery

AI-powered drug discovery platform with multi-property landscape optimization, virtual screening, and ADMET prediction.

Learn more →

Quick Start

Get a mesh agent running in 3 steps:

1. Create an Account

Sign up at portal.slentosystems.com and get your license key.

2. Install the Agent

Run the install script on each machine you want to optimize:

curl -fsSL https://mesh.slentosystems.com/install.sh | bash

3. Configure & Start

Edit the config file with your license key and controller address:

# ~/.mesh-optimizer/config.yaml
license_key: MESH-XXXX-XXXX-XXXX-XXXX
controller_url: http://your-controller:8401
node_name: my-workstation

Then start the agent:

mesh-optimizer start

Installation

System Requirements

Component   Minimum                 Recommended
Python      3.9+                    3.11+
OS          Linux, macOS, Windows   Linux (Ubuntu 22.04+)
RAM         2 GB                    8 GB+
Disk        500 MB                  2 GB+

Automatic Installation

The recommended way to install on Linux and macOS:

curl -fsSL https://mesh.slentosystems.com/install.sh | bash

This will:

  • Detect your OS and package manager
  • Install Python 3.9+ if not present
  • Create a virtual environment at ~/.mesh-optimizer/
  • Install all dependencies
  • Generate a default config file
  • Optionally create a systemd/launchd service

Manual Installation

python3 -m venv ~/.mesh-optimizer/venv
source ~/.mesh-optimizer/venv/bin/activate
pip install mesh-optimizer

GPU Support

GPU drivers required. Install the appropriate driver stack before running the agent:

GPU                Requirements
AMD (RDNA/CDNA)    ROCm 5.7+ or amdgpu driver
NVIDIA             CUDA 11.0+ and nvidia-smi
Intel              oneAPI or i915 driver

Configuration

The agent reads configuration from ~/.mesh-optimizer/config.yaml.

# Node identity
node_name: my-workstation
license_key: MESH-XXXX-XXXX-XXXX-XXXX

# Controller connection
controller_url: http://controller-ip:8401
heartbeat_interval: 10  # seconds

# Networking
nat_mode: false          # Set true if behind NAT
public_url: ""           # Your public URL if using NAT
nat_poll_interval: 5     # seconds between job polls in NAT mode

# Agent API (disabled in NAT mode)
agent_port: 8400

# Probing
probe_interval: 21600    # seconds (6 hours)
probe_types:             # which probes to run
  - bandwidth
  - compute
  - latency
  - memory

# Resources
max_concurrent_jobs: 2
gpu_devices: auto        # "auto", "all", or list of indices

Environment Variables

All config options can also be set via environment variables with the MESH_ prefix:

export MESH_LICENSE_KEY=MESH-XXXX-XXXX-XXXX-XXXX
export MESH_CONTROLLER_URL=http://controller:8401
export MESH_NAT_MODE=true
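
A minimal sketch of how file and environment configuration could be merged, with `MESH_`-prefixed variables taking precedence. The merge helper and its type coercion rules are illustrative assumptions, not the agent's actual loader; the file config is represented as a plain dict.

```python
# Hypothetical sketch: environment variables with the MESH_ prefix override
# values loaded from config.yaml (represented here as a plain dict).
def merge_config(file_config: dict, environ: dict) -> dict:
    merged = dict(file_config)
    for key, value in environ.items():
        if key.startswith("MESH_"):
            option = key[len("MESH_"):].lower()   # MESH_NAT_MODE -> nat_mode
            # Coerce booleans and integers the way a YAML loader would
            if value.lower() in ("true", "false"):
                merged[option] = value.lower() == "true"
            elif value.isdigit():
                merged[option] = int(value)
            else:
                merged[option] = value
    return merged

file_cfg = {"controller_url": "http://controller:8401", "nat_mode": False}
env = {"MESH_NAT_MODE": "true", "MESH_HEARTBEAT_INTERVAL": "30"}
cfg = merge_config(file_cfg, env)
# cfg["nat_mode"] is now True and cfg["heartbeat_interval"] is 30
```

File values not overridden by the environment pass through unchanged.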

How Mesh Optimizer Works

The Mesh Optimizer is a distributed system with three components:

Agent

Runs on each machine. Discovers hardware, reports health metrics, runs optimization probes, and executes routed jobs.

Controller

Central coordinator. Aggregates health data, manages the federated atlas, trains the JEPA model, and routes jobs to optimal nodes.

Dashboard

Web UI for monitoring node health, viewing atlas data, managing jobs, and controlling JEPA retraining.

Architecture

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Agent #1   │     │   Agent #2   │     │   Agent #3   │
│  (7900 XTX)  │     │  (RTX 4090)  │     │  (Xeon CPU)  │
└──────┬───────┘     └──────┬───────┘     └──────┬───────┘
       │ heartbeat          │ heartbeat          │ heartbeat
       │ atlas sync         │ atlas sync         │ atlas sync
       └────────────────────┼────────────────────┘
                            │
                    ┌───────▼───────┐
                    │  Controller   │
                    │  (JEPA Model) │
                    │  (Job Router) │
                    │  (Dashboard)  │
                    └───────────────┘

Data Flow

  1. Agents boot, scan hardware, and register with the controller
  2. Every 10 seconds: heartbeat with CPU, memory, GPU, and disk metrics
  3. Every 6 hours: run optimization probes and sync results to the controller
  4. Controller feeds probe data to the JEPA model for online learning
  5. When a job is submitted, the JEPA model predicts the best node
  6. Job is routed, executed, and results returned
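
The heartbeat in step 2 can be pictured as a small JSON document. The field names below are an illustrative assumption, not the agent's actual payload schema:

```python
import json
import time

# Illustrative sketch of the kind of heartbeat an agent might POST to the
# controller every 10 seconds (field names assumed, not the real schema).
def build_heartbeat(node_name: str, metrics: dict) -> str:
    payload = {
        "node": node_name,
        "timestamp": int(time.time()),
        "cpu_percent": metrics.get("cpu_percent"),
        "memory_percent": metrics.get("memory_percent"),
        "gpu_utilization": metrics.get("gpu_utilization"),  # None if no GPU
        "disk_percent": metrics.get("disk_percent"),
    }
    return json.dumps(payload)

hb = build_heartbeat(
    "my-workstation",
    {"cpu_percent": 12.5, "memory_percent": 41.0, "disk_percent": 63.2},
)
```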

Agent Setup

Each machine in your mesh runs an agent. The agent handles:

  • Hardware auto-discovery (CPUs, GPUs, FPGAs, memory)
  • Health reporting (CPU/GPU utilization, memory, temperatures)
  • Performance probing (bandwidth, compute, latency benchmarks)
  • Job execution (runs workloads routed by the controller)

Starting the Agent

# Start in foreground
mesh-optimizer start

# Start as a service (Linux)
sudo systemctl enable mesh-optimizer
sudo systemctl start mesh-optimizer

# Check status
mesh-optimizer status

Hardware Detection

The agent automatically detects:

HardwareDetection Method
AMD GPUsrocm-smi, /sys/class/drm
NVIDIA GPUsnvidia-smi
CPUs/proc/cpuinfo, psutil
Memorypsutil, dmidecode
FPGAslspci (Xilinx, Intel)

Controller

The controller is the brain of the mesh. It runs on one machine (typically the most capable) and coordinates all agents.

Starting the Controller

# Start controller (also starts a local agent)
mesh-optimizer controller start

# The controller runs three services:
#   Port 8400 - Local agent API
#   Port 8401 - Controller API (agents connect here)
#   Port 8402 - Dashboard (web UI)

JEPA Model

The controller trains a Joint Embedding Predictive Architecture (JEPA) model that learns hardware performance characteristics across your fleet. It uses this model to:

  • Predict which node will perform best for a given workload
  • Detect performance anomalies
  • Recommend optimal configurations

The model trains automatically from probe data. You can trigger manual retraining from the dashboard.
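
As a toy stand-in for the routing decision (not the actual JEPA model), the idea can be sketched as scoring each node's probe metrics against a job's resource profile; the real model learns these relationships rather than using fixed weights:

```python
# Toy stand-in for JEPA-driven routing: score each node by how closely its
# probe metrics match a job's resource profile. Metric names and weights
# are illustrative only; the real model is learned, not hand-weighted.
def predict_best_node(nodes: dict, job_profile: dict) -> str:
    def score(metrics: dict) -> float:
        return sum(metrics.get(k, 0.0) * w for k, w in job_profile.items())
    return max(nodes, key=lambda name: score(nodes[name]))

nodes = {
    "gpu-box": {"compute": 0.9, "bandwidth": 0.7, "latency": 0.4},
    "cpu-box": {"compute": 0.3, "bandwidth": 0.5, "latency": 0.9},
}
# A GPU-heavy job weights compute and bandwidth highly
best = predict_best_node(nodes, {"compute": 0.7, "bandwidth": 0.3})
```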

JEPA Mode

Each node can be configured to run the JEPA optimization engine in one of three modes. Set jepa_mode in your node config:

centralized
  Where JEPA runs: controller only
  Compute cost:    ~0% on agents
  Best for:        heterogeneous clusters where cross-node job routing matters; the controller has the global view of all hardware.

local
  Where JEPA runs: each node independently
  Compute cost:    ~1% CPU, ~200 MB RAM per node
  Best for:        edge deployments, disconnected nodes, or sub-millisecond local optimization decisions without controller latency.

off
  Where JEPA runs: nowhere
  Compute cost:    0%
  Best for:        low-power/embedded devices, or nodes where you only want monitoring and probes with no ML overhead.
# In mesh_config.yaml
node:
  jepa_mode: "centralized"   # or "local" or "off"

Centralized vs local tradeoff: The controller's global JEPA sees probe data from ALL nodes (potentially millions of data points), giving it better cross-cluster predictions. A local JEPA sees only its own node's data: great for self-optimization, but it cannot make cross-node routing decisions. You can mix modes, running "local" on GPU-heavy workstations and "centralized" on commodity servers.

Compute impact: The JEPA model is tiny (249K parameters). Even in "local" mode, inference takes <1 ms, feedback takes <1 ms, and a full retrain on 80K+ data points takes ~50 seconds on a single CPU core. It will not noticeably impact your workloads.

Failover

If the controller goes down, agents elect a new controller automatically:

  • Agents detect missing heartbeat after 2 minutes
  • Leader election based on: load, GPU capability, and uptime
  • New controller announces to all peers
  • Original controller yields when it comes back online
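
The election ordering might be sketched as a sort over peer metadata. The exact tie-break rules are internal to the product; ranking by lowest load, then highest GPU capability, then longest uptime is an assumption for illustration:

```python
# Sketch of leader election (assumed ordering: prefer low load, then high
# GPU capability, then long uptime). Field names are illustrative.
def elect_leader(peers: list[dict]) -> str:
    ranked = sorted(
        peers,
        key=lambda p: (p["load"], -p["gpu_score"], -p["uptime_s"]),
    )
    return ranked[0]["name"]

peers = [
    {"name": "node-a", "load": 0.8, "gpu_score": 10, "uptime_s": 86400},
    {"name": "node-b", "load": 0.2, "gpu_score": 6,  "uptime_s": 3600},
    {"name": "node-c", "load": 0.2, "gpu_score": 9,  "uptime_s": 7200},
]
leader = elect_leader(peers)
```

Here node-b and node-c tie on load, so the higher GPU score decides.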

Community Atlas Sharing

Mesh Optimizer can share anonymized performance data with the Slento community hub to improve optimization for all users. This is enabled by default and can be disabled at any time.

How It Works

  1. Your agent runs hardware probes and collects performance data (kernel throughput, optimal block sizes, memory bandwidth, etc.)
  2. Every 6 hours, the data is anonymized — hostnames, IPs, file paths, and job commands are stripped
  3. Only the hardware class (e.g., “RDNA3_24GB”, “Xeon_16C”) and performance metrics are sent to the community hub
  4. The hub aggregates data from all contributors to train a global optimization model
  5. Your controller periodically pulls model improvements back, enriching your local atlas with insights from hardware you don’t have
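
The anonymization step (step 2 above) can be sketched as a function that keeps only hardware class and metrics and replaces the cluster identity with a hash. Field names and the hashing scheme are illustrative assumptions:

```python
import hashlib

# Sketch of the anonymization step: strip identifying fields, keep the
# hardware class and performance metrics, and hash the cluster identity.
# Field names and the truncated-SHA-256 scheme are assumptions.
def anonymize(record: dict, cluster_secret: str) -> dict:
    return {
        "hardware_class": record["hardware_class"],   # e.g. "RDNA3_24GB"
        "metrics": record["metrics"],                  # performance numbers only
        "cluster_id": hashlib.sha256(cluster_secret.encode()).hexdigest()[:16],
    }

raw = {
    "hardware_class": "RDNA3_24GB",
    "metrics": {"mem_bandwidth_gbs": 912.4},
    "hostname": "lab-rig-01",     # stripped before sharing
    "ip": "192.168.1.42",         # stripped before sharing
}
shared = anonymize(raw, cluster_secret="my-cluster")
```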

What Is Shared

Shared                                 NOT Shared
Hardware class (e.g., “RDNA3_24GB”)    Exact GPU/CPU model names
Kernel performance numbers             Hostnames or IP addresses
Optimal parameter values               File paths or environment variables
Performance invariant boundaries       Job commands or workload descriptions
Anonymous cluster ID (hash)            MAC addresses or hardware serial numbers

Configuration

# In mesh_config.yaml
node:
  # Set to false to opt out of community sharing
  share_atlas_data: true   # default: enabled

Your mesh works identically either way. Disabling community sharing only means your data won’t contribute to the global model, and you won’t receive community model improvements. All local optimization features remain fully functional.

API Endpoints

  • GET /community/stats — View your community sync status (last push/pull times, points contributed)
  • POST /community/ingest — Receives aggregated community data from the hub (called by the hub, not by users)
  • GET /nodes/{node_id}/atlas-export — Export anonymized atlas data for a node

Networking

LAN Mode (Default)

All nodes on the same network. Controller connects directly to agent APIs.

# Agent config
controller_url: http://192.168.1.100:8401
agent_port: 8400

NAT/WAN Mode

For nodes behind firewalls or on different networks. Agents poll the controller for jobs instead of accepting inbound connections.

# Agent config
controller_url: https://mesh.yourcompany.com:8401
nat_mode: true
nat_poll_interval: 5

No inbound ports required. In NAT mode, agents make only outbound HTTPS connections, so no firewall changes are needed.

Hybrid Mode

Mix LAN and WAN nodes in the same mesh. LAN nodes use direct connections; WAN nodes use NAT mode.

Ports

Port   Service          Required
8400   Agent API        LAN mode only
8401   Controller API   Yes (inbound on controller)
8402   Dashboard        Optional

Dashboard

The web dashboard is available at http://controller:8402 and provides:

  • Node Grid — Live health status of all nodes with CPU, GPU, memory gauges
  • Node Detail — Per-node performance history, hardware inventory, probe results
  • Jobs — Job queue, history, execution times, and routing decisions
  • Atlas — Federated atlas data across all nodes
  • JEPA — Model confidence, training history, and manual retrain

Pharma Discovery

The Pharma Discovery platform uses AI to accelerate drug discovery workflows. It combines multi-property landscape exploration with virtual screening against 15,000+ ChEMBL targets.

Key Capabilities

  • SSI Landscape Exploration — Navigate chemical space using learned property landscapes
  • Virtual Screening — Screen compound libraries against validated targets
  • ADMET Prediction — Predict absorption, distribution, metabolism, excretion, and toxicity
  • Molecular Scoring — Multi-property optimization with configurable objectives
  • Computational Chemistry — Binding affinity estimation and conformational analysis

Exploration Runs

An exploration run navigates chemical space to find novel compounds matching your target profile. Each run:

  1. Defines a target property landscape (e.g., high binding + low toxicity)
  2. Generates and evaluates candidate molecules using the SSI swarm
  3. Returns a ranked list of promising compounds with predicted properties

Exploration credits are consumed per run. See Tiers & Pricing for credit allocations.
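
The ranking in step 3 can be pictured as a multi-property score. The weights and property names below are illustrative only, not the platform's actual objective function:

```python
# Toy multi-property ranking: combine predicted binding (higher is better)
# and toxicity (lower is better) into one score. Weights are illustrative.
def rank_candidates(candidates: list[dict]) -> list[str]:
    def score(c: dict) -> float:
        return 0.7 * c["binding"] - 0.3 * c["toxicity"]
    return [c["smiles"] for c in sorted(candidates, key=score, reverse=True)]

candidates = [
    {"smiles": "CCO",       "binding": 0.40, "toxicity": 0.10},
    {"smiles": "c1ccccc1O", "binding": 0.85, "toxicity": 0.60},
    {"smiles": "CC(=O)N",   "binding": 0.70, "toxicity": 0.15},
]
ranked = rank_candidates(candidates)
```

Note how the highest-binding compound loses the top spot once its toxicity is penalized.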

Virtual Screening

Screen your compound library against validated target models. Each screening credit covers one compound evaluation.

Supported target databases:

  • ChEMBL (15,000+ targets)
  • Custom target models (Enterprise tier)

Pharma API Reference

Check Usage & Credits

GET /api/pharma-usage.php?key=PHRM-XXXX-XXXX-XXXX-XXXX

Response:
{
  "license_key": "PHRM-XXXX-XXXX-XXXX-XXXX",
  "tier": "base",
  "credits": {
    "exploration": {"total": 5, "used": 2, "remaining": 3},
    "screening": {"total": 55000, "used": 12300, "remaining": 42700}
  },
  "usage": {
    "score": {"used": 1200, "limit": 5000},
    "screen": {"used": 0, "limit": 1}
  }
}
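
A small client-side helper, assuming the response shape above, might check remaining screening credits before submitting a batch:

```python
# Helper sketch for the usage response shown above: check whether enough
# screening credits remain before submitting a batch of compounds.
def can_screen(usage_response: dict, batch_size: int) -> bool:
    remaining = usage_response["credits"]["screening"]["remaining"]
    return remaining >= batch_size

resp = {
    "credits": {
        "exploration": {"total": 5, "used": 2, "remaining": 3},
        "screening": {"total": 55000, "used": 12300, "remaining": 42700},
    }
}
```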

Record Usage

POST /api/pharma-usage.php
Content-Type: application/json

{
  "key": "PHRM-XXXX-XXXX-XXXX-XXXX",
  "action": "score",
  "count": 100
}

Response:
{
  "status": "recorded",
  "remaining": {"score": 3800}
}

Tiers & Pricing

Mesh Optimizer

Tier          Price                                    Nodes         Features
Community     Free                                     Unlimited     Dashboard, job routing, AMD GPU optimization (one-time)
Professional  $29/node/month ($19/node/month annual)   As purchased  Everything in Community + continuous optimization for all hardware
Enterprise    Custom                                   Unlimited     SLA, dedicated support, custom integrations

Pharma Discovery

Tier        Price            Scoring     Screening   Exploration
Evaluation  Free (30 days)   5,000/mo    1 run/mo    2 credits
Base        $1,500/mo        5,000/mo    1 run/mo    Buy packs
Enterprise  $25,000/mo       Unlimited   Unlimited   Buy packs

Credit Packs (Add-On)

Pack                 Credits                          Price
Exploration (5)      5 exploration runs               $5,000
Exploration (20)     20 exploration runs              $15,000
Screening (50K)      50,000 compounds                 $2,500
Screening (500K)     500,000 compounds                $15,000
Campaign (1 target)  1 exploration + 100K screening   $25,000

License Activation

After purchasing, your license key appears in the portal. Add it to your config:

# Mesh Optimizer
license_key: MESH-XXXX-XXXX-XXXX-XXXX

# Pharma Discovery
license_key: PHRM-XXXX-XXXX-XXXX-XXXX

The agent validates the key on startup and periodically during operation.

License Validation API

POST /api/validate.php
Content-Type: application/json

{
  "license_key": "MESH-XXXX-XXXX-XXXX-XXXX",
  "hardware_id": "optional-machine-fingerprint"
}

Response:
{
  "valid": true,
  "tier": "professional",
  "max_nodes": 25,
  "expires_at": "2027-03-08T22:39:01Z",
  "product": "mesh_optimizer"
}
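
Given the response shape above, a client might check the license locally like this (a sketch; the `Z` suffix is rewritten to `+00:00` for `datetime.fromisoformat` compatibility on older Pythons):

```python
from datetime import datetime, timezone

# Sketch: parse expires_at from the validation response and check whether
# the license is still active at a given moment.
def license_active(response: dict, now: datetime) -> bool:
    expires = datetime.fromisoformat(response["expires_at"].replace("Z", "+00:00"))
    return response["valid"] and now < expires

resp = {"valid": True, "expires_at": "2027-03-08T22:39:01Z"}
check_time = datetime(2026, 6, 1, tzinfo=timezone.utc)
# license_active(resp, check_time) is True at this point in time
```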

REST API Reference

Agent API (Port 8400)

Method   Endpoint       Description
GET      /health        Quick health check
GET      /hardware      Full hardware inventory
POST     /probe/run     Trigger probes
POST     /jobs/submit   Execute a routed job
GET      /jobs/{id}     Job status

Controller API (Port 8401)

Method   Endpoint                Description
POST     /nodes/register         Register a new node
POST     /nodes/{id}/heartbeat   Submit health update
GET      /nodes                  List all nodes
POST     /atlas/sync             Upload probe data
POST     /jobs/submit            Submit job (auto-routed)
POST     /jepa/retrain           Trigger model retraining
GET      /jepa/stats             Model confidence stats
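
Constructing (not sending) a job submission request to the controller API can be sketched with the standard library. The endpoint comes from the table above; the payload fields are assumptions for illustration:

```python
import json
import urllib.request

# Sketch: build a POST request for the controller's /jobs/submit endpoint.
# The job payload fields ("command", "gpus") are illustrative assumptions.
def build_submit_request(controller_url: str, job: dict) -> urllib.request.Request:
    return urllib.request.Request(
        url=f"{controller_url}/jobs/submit",
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_submit_request("http://controller:8401", {"command": "train", "gpus": 1})
# Sending it would be: urllib.request.urlopen(req)
```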

Troubleshooting

Agent won't connect to controller

  • Verify the controller_url is reachable: curl http://controller:8401/health
  • Check firewall rules — port 8401 must be open on the controller
  • If behind NAT, set nat_mode: true

GPU not detected

  • AMD: Ensure rocm-smi or /sys/class/drm/card*/device/vendor is accessible
  • NVIDIA: Ensure nvidia-smi is in PATH
  • Check that the agent user has permission to access GPU devices

License validation fails

  • Verify the key in your config matches what's shown in the portal
  • Check that your license hasn't expired
  • Ensure the agent can reach portal.slentosystems.com on port 443

High latency on heartbeats

  • Increase heartbeat_interval if on a slow connection
  • Check network latency with ping controller-ip

Changelog

v1.0.0 — March 2026

  • Initial release
  • Mesh Optimizer with JEPA-driven job routing
  • Pharma Discovery with SSI exploration and virtual screening
  • Support for AMD (RDNA/CDNA), NVIDIA, Intel GPUs
  • NAT/WAN mode for distributed deployments
  • Web dashboard with live monitoring
  • Automatic failover with leader election