Welcome to the Ember ML documentation. Ember ML is a hardware-optimized neural network library that supports multiple backends (PyTorch, MLX, NumPy) to run efficiently across hardware platforms, including CUDA GPUs and Apple Metal.
- API Reference: Detailed API documentation for all modules
- Frontend Usage Guide: Comprehensive guide on using the Ember ML frontend
- Tensor Architecture: Detailed explanation of the tensor operations architecture
- Architecture: System architecture and design principles
- Ember ML Architecture: Comprehensive overview of the Ember ML architecture
- Function-First Design: Detailed explanation of the function-first design pattern
- Tutorials: Step-by-step guides for common tasks
- Examples: Code examples and use cases
- Plans: Development plans and roadmaps
```bash
pip install ember-ml
```

```python
import ember_ml
from ember_ml.nn.tensor import EmberTensor
from ember_ml import ops

# Set the backend
ember_ml.backend.set_backend('mlx')  # or 'torch' or 'numpy'

# Create a tensor
tensor = EmberTensor([[1, 2, 3], [4, 5, 6]])

# Perform operations
result = ops.matmul(tensor, EmberTensor([[1], [2], [3]]))
print(result)  # EmberTensor([[14], [32]])
```
For more detailed instructions, see the Getting Started guide and the Frontend Usage Guide.
- Hardware-Optimized Neural Networks: Implementation of cutting-edge neural network architectures optimized for different hardware platforms
- Multi-Backend Support: Backend-agnostic tensor operations that work with PyTorch, MLX, NumPy, and other computational backends
- Function-First Design: Efficient memory usage through separation of functions from class implementations
- Liquid Neural Networks: Design and implementation of liquid neural networks and other advanced architectures
- Neural Circuit Policies: Biologically-inspired neural architectures with custom wiring configurations
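The function-first idea above can be sketched in a few lines of framework-independent Python: operations live as free functions, and classes are thin state holders that delegate to them, so no operation logic is duplicated per instance. This is an illustrative sketch of the pattern, not Ember ML's actual internals.

```python
def matmul(a, b):
    """Pure function: 2-D matrix multiply on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class Tensor:
    """Thin state holder; all computation is delegated to free functions."""
    def __init__(self, data):
        self.data = data

    def __matmul__(self, other):
        return Tensor(matmul(self.data, other.data))

t = Tensor([[1, 2, 3], [4, 5, 6]])
v = Tensor([[1], [2], [3]])
print((t @ v).data)  # [[14], [32]]
```

Because the functions are stateless, the same `matmul` can serve any container type, which is what makes memory-efficient sharing across tensor classes possible.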
The project implements various cutting-edge neural network architectures:
- Liquid Neural Networks (LNN): Dynamic networks with adaptive connectivity
- Neural Circuit Policies (NCP): Biologically-inspired neural architectures
- Stride-Aware Continuous-time Fully Connected (CfC) networks
- Specialized attention mechanisms and temporal processing units
For more details, see the Architecture Documentation.
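To give a feel for the continuous-time dynamics these architectures build on, here is a simplified Euler-step update in the spirit of liquid/CfC networks: the hidden state relaxes toward an input-driven target with time constant `tau`. This is a hedged NumPy sketch of the general idea, not Ember ML's CfC implementation.

```python
import numpy as np

def ct_step(h, x, W_in, W_rec, b, tau=1.0, dt=0.1):
    """One Euler step of a continuous-time recurrent cell:
    dh/dt = (-h + tanh(x @ W_in + h @ W_rec + b)) / tau."""
    target = np.tanh(x @ W_in + h @ W_rec + b)
    return h + dt * (-h + target) / tau

rng = np.random.default_rng(0)
h = np.zeros(4)
W_in = rng.normal(size=(3, 4))
W_rec = rng.normal(size=(4, 4))
b = np.zeros(4)
for _ in range(10):
    x = rng.normal(size=3)        # one input sample per time step
    h = ct_step(h, x, W_in, W_rec, b)
print(h.shape)  # (4,)
```

Varying `dt` per step is what makes such cells stride-aware: unevenly sampled inputs simply take different step sizes.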
The project implements backend-agnostic tensor operations that can use different computational backends:
- MLX (optimized for Apple Silicon)
- PyTorch (for CUDA and other GPU platforms)
- NumPy (for CPU computation)
- Future support for additional backends
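The backend-switching mechanism can be pictured as a registry that resolves each operation to the active backend's implementation at call time. The registry below is a hypothetical sketch (only the NumPy entry is filled in), not Ember ML's actual dispatch code.

```python
import numpy as np

# Hypothetical backend registry; 'torch' and 'mlx' entries would map to
# torch.matmul, mx.matmul, etc.
_BACKENDS = {
    "numpy": {"matmul": np.matmul, "add": np.add},
}
_active = "numpy"

def set_backend(name):
    """Switch the backend used by all subsequent op calls."""
    global _active
    if name not in _BACKENDS:
        raise ValueError(f"unknown backend: {name}")
    _active = name

def matmul(a, b):
    """Backend-agnostic op: dispatches to the active backend."""
    return _BACKENDS[_active]["matmul"](a, b)

set_backend("numpy")
out = matmul(np.array([[1, 2, 3], [4, 5, 6]]), np.array([[1], [2], [3]]))
print(out.tolist())  # [[14], [32]]
```

Because user code only calls the front-end functions, switching hardware is a one-line change rather than a rewrite.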
The project includes tools for extracting features from large datasets, including:
- TerabyteFeatureExtractor: Extracts features from large datasets
- TemporalStrideProcessor: Processes temporal data with variable strides
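Variable-stride temporal processing amounts to slicing one long series into windows at several stride lengths, trading overlap for coverage. The helper below is a hypothetical NumPy sketch of that idea, not the `TemporalStrideProcessor` API itself.

```python
import numpy as np

def windows(series, width, stride):
    """Stack sliding windows of `width` samples taken every `stride` steps."""
    n = (len(series) - width) // stride + 1
    return np.stack([series[i * stride : i * stride + width]
                     for i in range(n)])

series = np.arange(100.0)
for stride in (1, 5, 10):
    w = windows(series, width=20, stride=stride)
    print(stride, w.shape)
# 1 (81, 20)
# 5 (17, 20)
# 10 (9, 20)
```

Smaller strides give many overlapping windows (fine temporal resolution); larger strides give fewer, cheaper windows, which matters at terabyte scale.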
If you encounter any issues or have questions:
- Check the tutorials and examples in this documentation
- Search for similar issues in the GitHub repository
- Ask a question in the Discussion forum
Ember ML is released under the MIT License.