Extension: rmath.nn
Neural network building blocks: activation functions, loss functions, normalization layers, and pooling. All operate on rmath.Array.
Activation Functions
| Function | Description |
| --- | --- |
| relu(a) | Rectified linear unit: max(0, x) |
| leaky_relu(a, alpha) | Leaky ReLU with slope alpha for negative inputs |
| sigmoid(a) | Logistic sigmoid: 1 / (1 + e⁻ˣ) |
| gelu(a) | Gaussian error linear unit |
| softmax(a) | Row-wise softmax (numerically stable) |
| hardswish(a) | Hard-swish activation |
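Since the table calls out a numerically stable softmax, here is a minimal NumPy sketch of that idea (shifting each row by its max before exponentiating), plus leaky ReLU. These are independent reference implementations, not rmath's code, and the alpha default of 0.01 is an assumption.

```python
import numpy as np

def softmax(a: np.ndarray) -> np.ndarray:
    """Row-wise softmax; subtracting the row max keeps exp() from overflowing."""
    shifted = a - a.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

def leaky_relu(a: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Identity for positive inputs, slope alpha for negative inputs."""
    return np.where(a > 0, a, alpha * a)

x = np.array([[1.0, 2.0, 3.0], [1000.0, 1000.0, 1000.0]])
print(softmax(x))        # second row stays finite thanks to the max shift
print(leaky_relu(np.array([[-2.0, 3.0]])))
```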
Loss Functions
| Function | Returns | Description |
| --- | --- | --- |
| mse_loss(a, target) | float | Mean squared error |
| cross_entropy_loss(a, labels) | float | Cross-entropy loss; labels are integer class indices |
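A hedged NumPy reference for the cross-entropy computation, assuming `a` holds raw logits (the table does not say whether rmath expects logits or probabilities): it combines a numerically stable log-softmax with indexing by the integer class labels.

```python
import numpy as np

def cross_entropy_loss(logits: np.ndarray, labels: np.ndarray) -> float:
    """Mean negative log-likelihood of the true class, via a stable log-softmax."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
labels = np.array([0, 2])  # integer class indices, as documented above
print(cross_entropy_loss(logits, labels))
```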
Normalization & Regularization
| Function | Description |
| --- | --- |
| batch_norm(a, mu, sigma, gamma, beta) | Batch normalization; mu, sigma, gamma, beta are Vectors |
| layer_norm(a, eps) | Layer normalization with epsilon eps for numerical stability |
| dropout(a, p) | Random dropout with probability p |
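A NumPy sketch of the two normalizers, under two assumptions the table leaves open: that sigma is the per-feature standard deviation, and that eps defaults to something like 1e-5.

```python
import numpy as np

def layer_norm(a: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each row to zero mean and unit variance; eps guards the division."""
    mu = a.mean(axis=-1, keepdims=True)
    var = a.var(axis=-1, keepdims=True)
    return (a - mu) / np.sqrt(var + eps)

def batch_norm(a, mu, sigma, gamma, beta):
    """Per-feature normalization, then learned scale (gamma) and shift (beta).
    Assumes sigma is the per-feature standard deviation (an assumption)."""
    return gamma * (a - mu) / sigma + beta

x = np.random.randn(4, 3)
print(layer_norm(x).std(axis=-1))  # close to 1.0 for every row
```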
Pooling
| Function | Returns | Description |
| --- | --- | --- |
| max_pool2d(a, kernel_size) | Array | 2D max pooling |
| avg_pool2d(a, kernel_size) | Array | 2D average pooling |
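A NumPy sketch of max_pool2d, assuming a square kernel, a stride equal to kernel_size, and ragged edges dropped; rmath may pad or stride differently.

```python
import numpy as np

def max_pool2d(a: np.ndarray, kernel_size: int) -> np.ndarray:
    """Non-overlapping 2D max pooling (stride == kernel_size assumed)."""
    h, w = a.shape
    k = kernel_size
    trimmed = a[: h - h % k, : w - w % k]           # drop ragged edges
    blocks = trimmed.reshape(h // k, k, w // k, k)  # split into k x k tiles
    return blocks.max(axis=(1, 3))                  # max within each tile

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x, 2))  # [[ 5.  7.] [13. 15.]]
```

Swapping `max` for `mean` in the last line of the function gives the corresponding average-pooling behavior.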