Scripts
The OpenHAIV framework provides a set of shell scripts that let users run experiments with pre-configured settings. The scripts are organized by task type. This document explains the script directory structure, naming conventions, and how to use and customize these scripts.
Script Directory Structure
The scripts are organized in the following directory structure:
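A plausible layout, reconstructed from the script names referenced in this document (the actual tree in the repository may differ):

```text
scripts/
├── sl/                         # supervised learning
│   ├── sl_oes_rn18.sh
│   ├── sl_cifar10_rn18.sh
│   └── sl_oes_coop-b16.sh
├── inc/                        # incremental learning
│   ├── inc_cub_lwf.sh
│   ├── inc_cub_icarl.sh
│   └── inc_BM200_savc.sh
└── ood/                        # out-of-distribution detection
    ├── det_oes_rn50_msp.sh
    ├── det_oes_rn50_mls.sh
    └── det_oes_clip-b16_glmcm.sh
```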
Naming Conventions
Scripts follow a consistent naming pattern that indicates:
- The task type (e.g., sl_, inc_, det_)
- The dataset (e.g., oes_, cifar10_, cub_)
- The model architecture (e.g., rn18, coop-b16)
- The algorithm or method (e.g., msp, lwf, savc)
Example: sl_oes_coop-b16.sh indicates a supervised learning task on the OES dataset using the CoOp-CLIP-B/16 model.
Script Types
Supervised Learning Scripts (sl/)
These scripts train models in a standard supervised learning setting. They typically use the PreTrainer trainer and standard classification losses.
Example:
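A minimal sketch of what such a script might contain; the main.py entry point, flag names, and config path are assumptions rather than the framework's documented interface:

```bash
#!/bin/bash
# Sketch: supervised training of ResNet18 on the OES dataset.
# Entry point, flag names, and config path are illustrative assumptions.
python main.py \
    --config configs/sl/sl_oes_rn18.yml \
    --device cuda:0
```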
Common supervised learning scripts include:
- sl_oes_rn18.sh: Trains a ResNet18 model on the OES dataset
- sl_cifar10_rn18.sh: Trains a ResNet18 model on the CIFAR10 dataset
- sl_oes_coop-b16.sh: Trains a CoOp-CLIP-B/16 model on the OES dataset
Incremental Learning Scripts (inc/)
These scripts train models in an incremental learning setting, where new classes are introduced over time. There are two main subtypes:
Class-incremental Learning
Example:
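An illustrative sketch along the same lines (entry point, flags, and config path are assumptions):

```bash
#!/bin/bash
# Sketch: class-incremental learning with LwF on the CUB200 dataset.
python main.py \
    --config configs/inc/inc_cub_lwf.yml \
    --device cuda:0
```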
Few-shot Class-incremental Learning
Example:
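An illustrative sketch (same assumptions as above):

```bash
#!/bin/bash
# Sketch: few-shot class-incremental learning with SAVC on the BM200 dataset.
python main.py \
    --config configs/inc/inc_BM200_savc.yml \
    --device cuda:0
```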
Common incremental learning scripts include:
- inc_cub_lwf.sh: Learning without Forgetting (LwF) on the CUB200 dataset
- inc_cub_icarl.sh: The iCaRL method on the CUB200 dataset
- inc_BM200_savc.sh: The SAVC method on the BM200 dataset (few-shot setting)
Out-of-Distribution Detection Scripts (ood/)
These scripts train and evaluate models for out-of-distribution detection, identifying samples that don't belong to the training distribution.
Example:
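An illustrative sketch (entry point, flags, and config path are assumptions):

```bash
#!/bin/bash
# Sketch: OOD detection with the MSP score, ResNet50 backbone, and OES as in-distribution data.
python main.py \
    --config configs/ood/det_oes_rn50_msp.yml \
    --device cuda:0
```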
Common OOD detection scripts include:
- det_oes_rn50_msp.sh: The Maximum Softmax Probability (MSP) method with ResNet50 on the OES dataset
- det_oes_rn50_mls.sh: The Maximum Logit Score (MLS) method with ResNet50 on the OES dataset
- det_oes_clip-b16_glmcm.sh: The GL-MCM method with CLIP-B/16 on the OES dataset
Batch Processing Scripts
Some scripts are designed to run multiple related experiments in sequence:
Example:
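A sketch of such a batch script, chaining the single-experiment scripts listed above (the scripts/ prefix is an assumption about the repository layout):

```bash
#!/bin/bash
# Sketch: run several OOD detection experiments back to back.
bash scripts/ood/det_oes_rn50_msp.sh
bash scripts/ood/det_oes_rn50_mls.sh
bash scripts/ood/det_oes_clip-b16_glmcm.sh
```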
Customizing Scripts
You can customize existing scripts or create new ones by following these steps:
- Copy an existing script as a starting point (see the sketch after this list)
- Modify the configuration path inside the new script
- Update the script comments to reflect your experiment
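A sketch of these steps; the file names, config paths, and the way the config is referenced inside the script are all illustrative assumptions:

```bash
# 1. Copy an existing script as a starting point.
cp scripts/sl/sl_oes_rn18.sh scripts/sl/sl_mydataset_rn18.sh

# 2. Point the new script at your own configuration file
#    (here with sed; editing the file by hand works just as well).
sed -i 's#configs/sl/sl_oes_rn18.yml#configs/sl/sl_mydataset_rn18.yml#' \
    scripts/sl/sl_mydataset_rn18.sh

# 3. Open the new script and update its header comments to describe the experiment.
```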
Running Scripts
To run a script, simply execute it with bash:
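For example (the script path is illustrative):

```bash
bash scripts/sl/sl_oes_rn18.sh
```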
You can also override configuration parameters directly from the command line:
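The exact override syntax depends on how a given script forwards its arguments; one plausible pattern, assuming extra arguments are passed through to the Python entry point, is:

```bash
# Illustrative: extra arguments forwarded to the training entry point (assumption).
bash scripts/sl/sl_oes_rn18.sh --device cuda:1 --epochs 100
```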
Advanced Usage
GPU Selection
Most scripts allow specifying the GPU device using the device parameter:
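For example (the flag spelling and entry point are assumptions):

```bash
# Illustrative: select the first GPU when launching training.
python main.py --config configs/sl/sl_oes_rn18.yml --device cuda:0
```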
To use a different GPU, change cuda:0 to the desired GPU index (e.g., cuda:1).
Multi-stage Training
Some scripts implement multi-stage training, where different phases (e.g., training, testing) are executed sequentially:
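A sketch of such a sequence; the phase flag, entry point, and config path are assumptions:

```bash
#!/bin/bash
# Sketch: run the training phase, then the testing phase, in one script.
# The --phase flag is an assumption about how stages are selected.
python main.py --config configs/inc/inc_cub_lwf.yml --phase train
python main.py --config configs/inc/inc_cub_lwf.yml --phase test
```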
Running on Multiple GPUs
For multi-GPU training, you can use the CUDA_VISIBLE_DEVICES environment variable:
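For example (the script path is illustrative):

```bash
# Expose GPUs 0 and 1 to the script via CUDA_VISIBLE_DEVICES.
CUDA_VISIBLE_DEVICES=0,1 bash scripts/sl/sl_oes_rn18.sh
```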