# Model Zoo
OpenRetina provides a collection of pre-trained retinal models from published research. These models can be easily loaded and used for inference, analysis, or as starting points for further training.
## Available Pre-trained Models
All models are automatically downloaded and cached when first used. They are hosted on Hugging Face.
### Höfling et al., 2024 Models
Based on the paper "A chromatic feature detector in the retina signals visual context changes" (eLife 2024).
| Model Name | Type | Input Shape | Description | Size |
|---|---|---|---|---|
| `hoefling_2024_base_low_res` | Core-Readout | (2, T, 16, 18) | Low-resolution model trained on mouse retina calcium imaging data | ~50MB |
| `hoefling_2024_base_high_res` | Core-Readout | (2, T, 32, 36) | High-resolution version with larger spatial input | ~80MB |
**Dataset:** Mouse retina calcium imaging responses to natural scenes and artificial stimuli. TODO link
**Usage Example:**

```python
import torch
from openretina.models import load_core_readout_from_remote

# Load the low-resolution model on CPU
model = load_core_readout_from_remote("hoefling_2024_base_low_res", "cpu")

# Sample input matching the table above: (batch=1, channels=2, time_steps=50, height=16, width=18)
stimulus = torch.rand(1, 2, 50, 16, 18)
responses = model(stimulus)
```
### Karamanlis et al., 2024 Models
Based on the paper "Nonlinear receptive fields evoke redundant retinal coding of natural scenes" (Nature 2024).
| Model Name | Type | Input Shape | Description | Size |
|---|---|---|---|---|
| `karamanlis_2024_base` | Core-Readout | (1, T, H, W) | Base convolutional model for primate retina | ~60MB |
| `karamanlis_2024_gru` | GRU Core-Readout | (1, T, H, W) | Model with GRU temporal processing | ~70MB |
**Dataset:** Primate retina responses to natural scenes. TODO dataset link.
**Usage Example:**

```python
import torch
from openretina.models import load_core_readout_from_remote

# Load the GRU-based model on GPU
model = load_core_readout_from_remote("karamanlis_2024_gru", "cuda")

# The exact input dimensions depend on the dataset preprocessing,
# so query the model for the expected stimulus shape
stimulus_shape = model.stimulus_shape(time_steps=100)
stimulus = torch.rand(stimulus_shape, device="cuda")
responses = model(stimulus)
```
### Maheswaranathan et al., 2023 Models
Based on the paper "Interpreting the retinal neural code for natural scenes: From computations to neurons" (Neuron 2023).
| Model Name | Type | Input Shape | Description | Size |
|---|---|---|---|---|
| `maheswaranathan_2023_base` | Core-Readout | (1, T, H, W) | Base model for salamander retina | ~45MB |
| `maheswaranathan_2023_gru` | GRU Core-Readout | (1, T, H, W) | Model with recurrent temporal dynamics | ~55MB |
**Dataset:** Salamander retina responses to natural movies. TODO dataset link.
**Usage Example:**

```python
import torch
from openretina.models import load_core_readout_from_remote

# Load the base model on CPU
model = load_core_readout_from_remote("maheswaranathan_2023_base", "cpu")

# Get the appropriate input shape for 4 clips of 200 time steps
input_shape = model.stimulus_shape(time_steps=200, num_batches=4)
stimulus = torch.rand(input_shape)
responses = model(stimulus)
```
## Loading and Using Models

### Basic Loading
```python
import torch
from openretina.models import load_core_readout_from_remote

# Load any available model
model = load_core_readout_from_remote("hoefling_2024_base_low_res", "cpu")

# Check model properties
print(f"Number of neurons: {model.readout.n_neurons}")
print(f"Input shape for 50 time steps: {model.stimulus_shape(time_steps=50)}")
```
### Device Handling
```python
import torch
from openretina.models import load_core_readout_from_remote

# Load on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = load_core_readout_from_remote("karamanlis_2024_gru", device)

# Move an existing model to a different device
model = model.to("cuda")
```
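The model and its input must live on the same device, or PyTorch raises a device-mismatch error. A minimal, self-contained sketch of the pattern, using a hypothetical `torch.nn.Linear` stand-in rather than a downloaded model:

```python
import torch

# Hypothetical stand-in module; a real OpenRetina model follows the same rules
model = torch.nn.Linear(4, 2)

# Pick the best available device and move the model there
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# The stimulus must be created on (or moved to) the same device
stimulus = torch.rand(3, 4).to(device)
output = model(stimulus)
```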
## Model Storage and Caching
Models are automatically cached in your local filesystem:

- **Default cache location:** `~/.cache/openretina/`
- **Custom cache location:** set via the `cache_directory_path` parameter
- **Manual cache management:** use the functions in `openretina.utils.file_utils`
```python
from openretina.models import load_core_readout_from_remote
from openretina.utils.file_utils import get_cache_directory

# Check the cache location
cache_dir = get_cache_directory()
print(f"Models cached in: {cache_dir}")

# Load with a custom cache location
model = load_core_readout_from_remote(
    "hoefling_2024_base_low_res",
    "cpu",
    cache_directory_path="/custom/path",
)
```
## Troubleshooting

### Common Issues
**Model download fails:**

- Check your internet connection
- Verify cache directory permissions
- Try a different cache location

**Out of memory errors:**

- Use CPU instead of GPU for inference
- Reduce the batch size or temporal length
- Use the lower-resolution models

**Input shape mismatches:**

- Use `model.stimulus_shape()` to get the correct input dimensions
- Check channel ordering (some models expect specific color channels)
- Verify the temporal length is appropriate
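The memory advice above can be sketched concretely. The tiny `Conv3d` below is a hypothetical stand-in for a core-readout model (2 stimulus channels in, 8 "neurons" out), not an actual OpenRetina model; the pattern it shows, inference under `torch.no_grad()` with shorter clips and batch size 1, applies to the real models:

```python
import torch

# Hypothetical stand-in model: 2 input channels -> 8 output "neurons"
model = torch.nn.Conv3d(in_channels=2, out_channels=8, kernel_size=(11, 5, 5))
model.eval()

# no_grad() skips storing activations for backprop, cutting peak memory
stimulus = torch.rand(1, 2, 50, 16, 18)  # (batch, channels, time, height, width)
with torch.no_grad():
    responses = model(stimulus)

# Shorter clips and batch size 1 further reduce the memory footprint
short_stimulus = torch.rand(1, 2, 20, 16, 18)
with torch.no_grad():
    short_responses = model(short_stimulus)
```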
### Getting Help

For model-specific issues:

1. Check the FAQ
2. Review the original paper documentation
3. Open an issue on GitHub
4. Contact the model authors