Welcome to fastreid’s documentation!¶
API Documentation¶
fastreid.checkpoint¶
fastreid.config¶
Related tutorials: ../tutorials/configs, ../tutorials/extend.
@author: l1aoxingyu @contact: sherlockliao01@gmail.com
Config References¶
# Convention about Training / Test specific parameters
# -----------------------------------------------------------------------------
# Whenever an argument can be either used for training or for testing, the
# corresponding name will be post-fixed by a _TRAIN for a training parameter,
# or _TEST for a test-specific parameter.
# For example, the number of images during training will be
# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be
# IMAGES_PER_BATCH_TEST
# -----------------------------------------------------------------------------
# Config definition
# -----------------------------------------------------------------------------
_C = CN()
# -----------------------------------------------------------------------------
# MODEL
# -----------------------------------------------------------------------------
_C.MODEL = CN()
_C.MODEL.DEVICE = "cuda"
_C.MODEL.META_ARCHITECTURE = "Baseline"
_C.MODEL.FREEZE_LAYERS = ['']
# MoCo memory size
_C.MODEL.QUEUE_SIZE = 8192
# ---------------------------------------------------------------------------- #
# Backbone options
# ---------------------------------------------------------------------------- #
_C.MODEL.BACKBONE = CN()
_C.MODEL.BACKBONE.NAME = "build_resnet_backbone"
_C.MODEL.BACKBONE.DEPTH = "50x"
_C.MODEL.BACKBONE.LAST_STRIDE = 1
# Backbone feature dimension
_C.MODEL.BACKBONE.FEAT_DIM = 2048
# Normalization method for the convolution layers.
_C.MODEL.BACKBONE.NORM = "BN"
# Whether to use IBN blocks in the backbone
_C.MODEL.BACKBONE.WITH_IBN = False
# Whether to use SE blocks in the backbone
_C.MODEL.BACKBONE.WITH_SE = False
# Whether to use Non-local blocks in the backbone
_C.MODEL.BACKBONE.WITH_NL = False
# Whether to load an ImageNet-pretrained model
_C.MODEL.BACKBONE.PRETRAIN = True
# Pretrain model path
_C.MODEL.BACKBONE.PRETRAIN_PATH = ''
# ---------------------------------------------------------------------------- #
# REID HEADS options
# ---------------------------------------------------------------------------- #
_C.MODEL.HEADS = CN()
_C.MODEL.HEADS.NAME = "EmbeddingHead"
# Normalization method for the convolution layers.
_C.MODEL.HEADS.NORM = "BN"
# Number of identities
_C.MODEL.HEADS.NUM_CLASSES = 0
# Embedding dimension in head
_C.MODEL.HEADS.EMBEDDING_DIM = 0
# Whether to use BNNeck in the embedding head
_C.MODEL.HEADS.WITH_BNNECK = True
# Use the feature before (or after) the BNNeck for the triplet loss
_C.MODEL.HEADS.NECK_FEAT = "before" # options: before, after
# Pooling layer type
_C.MODEL.HEADS.POOL_LAYER = "avgpool"
# Classification layer type
_C.MODEL.HEADS.CLS_LAYER = "linear" # "arcSoftmax" or "circleSoftmax"
# Margin and Scale for margin-based classification layer
_C.MODEL.HEADS.MARGIN = 0.15
_C.MODEL.HEADS.SCALE = 128
# ---------------------------------------------------------------------------- #
# REID LOSSES options
# ---------------------------------------------------------------------------- #
_C.MODEL.LOSSES = CN()
_C.MODEL.LOSSES.NAME = ("CrossEntropyLoss",)
# Cross Entropy Loss options
_C.MODEL.LOSSES.CE = CN()
# If epsilon == 0, no label smoothing regularization is applied;
# if epsilon == -1, adaptive label smoothing regularization is used.
_C.MODEL.LOSSES.CE.EPSILON = 0.0
_C.MODEL.LOSSES.CE.ALPHA = 0.2
_C.MODEL.LOSSES.CE.SCALE = 1.0
# Focal Loss options
_C.MODEL.LOSSES.FL = CN()
_C.MODEL.LOSSES.FL.ALPHA = 0.25
_C.MODEL.LOSSES.FL.GAMMA = 2
_C.MODEL.LOSSES.FL.SCALE = 1.0
# Triplet Loss options
_C.MODEL.LOSSES.TRI = CN()
_C.MODEL.LOSSES.TRI.MARGIN = 0.3
_C.MODEL.LOSSES.TRI.NORM_FEAT = False
_C.MODEL.LOSSES.TRI.HARD_MINING = True
_C.MODEL.LOSSES.TRI.SCALE = 1.0
# Circle Loss options
_C.MODEL.LOSSES.CIRCLE = CN()
_C.MODEL.LOSSES.CIRCLE.MARGIN = 0.25
_C.MODEL.LOSSES.CIRCLE.GAMMA = 128
_C.MODEL.LOSSES.CIRCLE.SCALE = 1.0
# Cosface Loss options
_C.MODEL.LOSSES.COSFACE = CN()
_C.MODEL.LOSSES.COSFACE.MARGIN = 0.25
_C.MODEL.LOSSES.COSFACE.GAMMA = 128
_C.MODEL.LOSSES.COSFACE.SCALE = 1.0
# Path to a checkpoint file to be loaded to the model. You can find available models in the model zoo.
_C.MODEL.WEIGHTS = ""
# Values to be used for image normalization
_C.MODEL.PIXEL_MEAN = [0.485*255, 0.456*255, 0.406*255]
# Values to be used for image normalization
_C.MODEL.PIXEL_STD = [0.229*255, 0.224*255, 0.225*255]
# -----------------------------------------------------------------------------
# KNOWLEDGE DISTILLATION
# -----------------------------------------------------------------------------
_C.KD = CN()
_C.KD.MODEL_CONFIG = ""
_C.KD.MODEL_WEIGHTS = ""
# -----------------------------------------------------------------------------
# INPUT
# -----------------------------------------------------------------------------
_C.INPUT = CN()
# Size of the image during training
_C.INPUT.SIZE_TRAIN = [256, 128]
# Size of the image during test
_C.INPUT.SIZE_TEST = [256, 128]
# Random probability for image horizontal flip
_C.INPUT.DO_FLIP = True
_C.INPUT.FLIP_PROB = 0.5
# Value of padding size
_C.INPUT.DO_PAD = True
_C.INPUT.PADDING_MODE = 'constant'
_C.INPUT.PADDING = 10
# Random color jitter
_C.INPUT.CJ = CN()
_C.INPUT.CJ.ENABLED = False
_C.INPUT.CJ.PROB = 0.5
_C.INPUT.CJ.BRIGHTNESS = 0.15
_C.INPUT.CJ.CONTRAST = 0.15
_C.INPUT.CJ.SATURATION = 0.1
_C.INPUT.CJ.HUE = 0.1
# Random Affine
_C.INPUT.DO_AFFINE = False
# Auto augmentation
_C.INPUT.DO_AUTOAUG = False
_C.INPUT.AUTOAUG_PROB = 0.0
# Augmix augmentation
_C.INPUT.DO_AUGMIX = False
_C.INPUT.AUGMIX_PROB = 0.0
# Random Erasing
_C.INPUT.REA = CN()
_C.INPUT.REA.ENABLED = False
_C.INPUT.REA.PROB = 0.5
_C.INPUT.REA.VALUE = [0.485*255, 0.456*255, 0.406*255]
# Random Patch
_C.INPUT.RPT = CN()
_C.INPUT.RPT.ENABLED = False
_C.INPUT.RPT.PROB = 0.5
# -----------------------------------------------------------------------------
# Dataset
# -----------------------------------------------------------------------------
_C.DATASETS = CN()
# List of the dataset names for training
_C.DATASETS.NAMES = ("Market1501",)
# List of the dataset names for testing
_C.DATASETS.TESTS = ("Market1501",)
# Combine the train set and test set for joint training
_C.DATASETS.COMBINEALL = False
# -----------------------------------------------------------------------------
# DataLoader
# -----------------------------------------------------------------------------
_C.DATALOADER = CN()
# P/K Sampler for data loading
_C.DATALOADER.PK_SAMPLER = True
# Naive sampler that does not consider balanced identity sampling
_C.DATALOADER.NAIVE_WAY = True
# Number of instances for each person
_C.DATALOADER.NUM_INSTANCE = 4
_C.DATALOADER.NUM_WORKERS = 8
# ---------------------------------------------------------------------------- #
# Solver
# ---------------------------------------------------------------------------- #
_C.SOLVER = CN()
# AUTOMATIC MIXED PRECISION
_C.SOLVER.FP16_ENABLED = False
# Optimizer
_C.SOLVER.OPT = "Adam"
_C.SOLVER.MAX_EPOCH = 120
_C.SOLVER.BASE_LR = 3e-4
_C.SOLVER.BIAS_LR_FACTOR = 1.
_C.SOLVER.HEADS_LR_FACTOR = 1.
_C.SOLVER.MOMENTUM = 0.9
_C.SOLVER.NESTEROV = True
_C.SOLVER.WEIGHT_DECAY = 0.0005
_C.SOLVER.WEIGHT_DECAY_BIAS = 0.
# Multi-step learning rate options
_C.SOLVER.SCHED = "MultiStepLR"
_C.SOLVER.DELAY_EPOCHS = 0
_C.SOLVER.GAMMA = 0.1
_C.SOLVER.STEPS = [30, 55]
# Cosine annealing learning rate options
_C.SOLVER.ETA_MIN_LR = 1e-7
# Warmup options
_C.SOLVER.WARMUP_FACTOR = 0.1
_C.SOLVER.WARMUP_ITERS = 1000
_C.SOLVER.WARMUP_METHOD = "linear"
# Backbone freeze iters
_C.SOLVER.FREEZE_ITERS = 0
# FC freeze iters
_C.SOLVER.FREEZE_FC_ITERS = 0
# SWA options
# _C.SOLVER.SWA = CN()
# _C.SOLVER.SWA.ENABLED = False
# _C.SOLVER.SWA.ITER = 10
# _C.SOLVER.SWA.PERIOD = 2
# _C.SOLVER.SWA.LR_FACTOR = 10.
# _C.SOLVER.SWA.ETA_MIN_LR = 3.5e-6
# _C.SOLVER.SWA.LR_SCHED = False
_C.SOLVER.CHECKPOINT_PERIOD = 20
# Number of images per batch across all machines.
# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will
# see 2 images per batch
_C.SOLVER.IMS_PER_BATCH = 64
# ---------------------------------------------------------------------------- #
# TEST options
# ---------------------------------------------------------------------------- #
_C.TEST = CN()
_C.TEST.EVAL_PERIOD = 20
# Number of images per batch in one process.
_C.TEST.IMS_PER_BATCH = 64
_C.TEST.METRIC = "cosine"
_C.TEST.ROC_ENABLED = False
_C.TEST.FLIP_ENABLED = False
# Average query expansion
_C.TEST.AQE = CN()
_C.TEST.AQE.ENABLED = False
_C.TEST.AQE.ALPHA = 3.0
_C.TEST.AQE.QE_TIME = 1
_C.TEST.AQE.QE_K = 5
# Re-rank
_C.TEST.RERANK = CN()
_C.TEST.RERANK.ENABLED = False
_C.TEST.RERANK.K1 = 20
_C.TEST.RERANK.K2 = 6
_C.TEST.RERANK.LAMBDA = 0.3
# Precise batchnorm
_C.TEST.PRECISE_BN = CN()
_C.TEST.PRECISE_BN.ENABLED = False
_C.TEST.PRECISE_BN.DATASET = 'Market1501'
_C.TEST.PRECISE_BN.NUM_ITER = 300
# ---------------------------------------------------------------------------- #
# Misc options
# ---------------------------------------------------------------------------- #
_C.OUTPUT_DIR = "logs/"
# Benchmark different cudnn algorithms.
# If input images have very different sizes, this option will have large overhead
# for about 10k iterations. It usually hurts total time, but can benefit for certain models.
# If input images have the same or similar sizes, benchmark is often helpful.
_C.CUDNN_BENCHMARK = False
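For orientation, a minimal sketch of how these defaults are usually loaded and overridden; it assumes the standard get_cfg() entry point in fastreid.config, and the YAML path is only an illustrative placeholder:

    from fastreid.config import get_cfg

    cfg = get_cfg()                                   # copy of the default CfgNode listed above
    cfg.merge_from_file("configs/my_experiment.yml")  # hypothetical path: override defaults from YAML
    cfg.merge_from_list(["MODEL.DEVICE", "cpu"])      # override individual keys from a flat list
    cfg.freeze()                                      # make the config immutable before training
    print(cfg.MODEL.BACKBONE.DEPTH)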
fastreid.data¶
fastreid.data.data_utils module¶
fastreid.data.datasets module¶
fastreid.data.samplers module¶
fastreid.data.transforms module¶
fastreid.data.transforms¶
fastreid.evaluation¶
fastreid.layers¶
fastreid.modeling¶
Model Registries¶
These are the different registries provided in modeling. Each registry gives you the ability to replace a component with your own customized one, without having to modify fastreid’s code.
Note that it is impossible to allow users to customize any line of code directly. Even to add just one line at some place, you’ll likely need to find out the smallest registry which contains that line, and register your component to that registry.
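As an illustration, a minimal sketch of registering a custom backbone and selecting it from the config; the builder name and module body are hypothetical, while BACKBONE_REGISTRY is the registry exported by fastreid.modeling.backbones:

    import torch.nn as nn
    from fastreid.modeling.backbones import BACKBONE_REGISTRY

    @BACKBONE_REGISTRY.register()
    def build_my_backbone(cfg):
        # hypothetical builder: any nn.Module producing feature maps with
        # cfg.MODEL.BACKBONE.FEAT_DIM channels works here
        return nn.Conv2d(3, cfg.MODEL.BACKBONE.FEAT_DIM, kernel_size=3, stride=2, padding=1)

    # then select it in the YAML config:
    #   MODEL:
    #     BACKBONE:
    #       NAME: build_my_backbone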
fastreid.solver¶
fastreid.utils¶
fastreid.utils.colormap module¶
fastreid.utils.comm module¶
This file contains primitives for multi-gpu communication. This is useful when doing distributed training.
fastreid.utils.comm.get_local_rank() → int
    Returns: The rank of the current process within the local (per-machine) process group.
fastreid.utils.comm.get_local_size() → int
    Returns: The size of the per-machine process group, i.e. the number of processes per machine.
fastreid.utils.comm.synchronize()
    Helper function to synchronize (barrier) among all processes when using distributed training.
fastreid.utils.comm.all_gather(data, group=None)
    Run all_gather on arbitrary picklable data (not necessarily tensors).
    Parameters:
        - data – any picklable object
        - group – a torch process group. By default, will use a group which contains all ranks on the gloo backend.
    Returns: list[data] – list of data gathered from each rank.
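For example, a short sketch of gathering per-process results (such as evaluation counts) onto every rank; the dictionary contents are placeholders and the process group is assumed to have been initialized by the launcher:

    from fastreid.utils import comm

    local_result = {"rank": comm.get_local_rank(), "num_samples": 128}  # any picklable object
    all_results = comm.all_gather(local_result)   # same list of per-rank objects on every rank
    total = sum(r["num_samples"] for r in all_results)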
fastreid.utils.comm.gather(data, dst=0, group=None)
    Run gather on arbitrary picklable data (not necessarily tensors).
    Parameters:
        - data – any picklable object
        - dst (int) – destination rank
        - group – a torch process group. By default, will use a group which contains all ranks on the gloo backend.
    Returns: list[data] – on dst, a list of data gathered from each rank. Otherwise, an empty list.
fastreid.utils.comm.shared_random_seed() → int
    Returns: int – a random number that is the same across all workers. If workers need a shared RNG, they can use this shared seed to create one.
    All workers must call this function, otherwise it will deadlock.
fastreid.utils.comm.reduce_dict(input_dict, average=True)
    Reduce the values in the dictionary from all processes so that the process with rank 0 has the reduced results.
    Parameters:
        - input_dict (dict) – inputs to be reduced. All the values must be scalar CUDA Tensor.
        - average (bool) – whether to do average or sum
    Returns: a dict with the same keys as input_dict, after reduction.
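A minimal sketch of reducing a per-process loss dictionary, assuming an initialized process group and the scalar CUDA tensors the docstring requires; loss names and values are placeholders:

    import torch
    from fastreid.utils import comm

    loss_dict = {"loss_cls": torch.tensor(0.52, device="cuda"),
                 "loss_triplet": torch.tensor(0.31, device="cuda")}
    reduced = comm.reduce_dict(loss_dict, average=True)   # averaged values, meaningful on rank 0
    if comm.get_local_rank() == 0:    # on a single machine, local rank 0 is the main process
        print({k: v.item() for k, v in reduced.items()})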
fastreid.utils.events module¶
fastreid.utils.events.get_event_storage()
    Returns: The EventStorage object that’s currently being used. Throws an error if no EventStorage is currently enabled.
class fastreid.utils.events.JSONWriter(json_file, window_size=20)
    Bases: fastreid.utils.events.EventWriter
    Write scalars to a json file. It saves scalars as one json per line (instead of a big json) for easy parsing.
    Examples of parsing such a json file:

        $ cat metrics.json | jq -s '.[0:2]'
        [
          {
            "data_time": 0.008433341979980469,
            "iteration": 19,
            "loss": 1.9228371381759644,
            "loss_box_reg": 0.050025828182697296,
            "loss_classifier": 0.5316952466964722,
            "loss_mask": 0.7236229181289673,
            "loss_rpn_box": 0.0856662318110466,
            "loss_rpn_cls": 0.48198649287223816,
            "lr": 0.007173333333333333,
            "time": 0.25401854515075684
          },
          {
            "data_time": 0.007216215133666992,
            "iteration": 39,
            "loss": 1.282649278640747,
            "loss_box_reg": 0.06222952902317047,
            "loss_classifier": 0.30682939291000366,
            "loss_mask": 0.6970193982124329,
            "loss_rpn_box": 0.038663312792778015,
            "loss_rpn_cls": 0.1471673548221588,
            "lr": 0.007706666666666667,
            "time": 0.2490077018737793
          }
        ]

        $ cat metrics.json | jq '.loss_mask'
        0.7126231789588928
        0.689423680305481
        0.6776131987571716
        ...
class fastreid.utils.events.TensorboardXWriter(log_dir: str, window_size: int = 20, **kwargs)
    Bases: fastreid.utils.events.EventWriter
    Write all scalars to a tensorboard file.
class fastreid.utils.events.CommonMetricPrinter(max_iter)
    Bases: fastreid.utils.events.EventWriter
    Print common metrics to the terminal, including iteration time, ETA, memory, all losses, and the learning rate. It also applies smoothing using a window of 20 elements. It’s meant to print common metrics in common ways. To print something in more customized ways, please implement a similar printer by yourself.
class fastreid.utils.events.EventStorage(start_iter=0)
    Bases: object
    The user-facing class that provides metric storage functionalities. In the future we may add support for storing / logging other types of data if needed.
    put_image(img_name, img_tensor)
        Add an img_tensor associated with img_name, to be shown on tensorboard.
        Parameters:
            - img_name (str) – The name of the image to put into tensorboard.
            - img_tensor – An uint8 or float Tensor of shape [channel, height, width] where channel is 3. The image format should be RGB. The elements in img_tensor can either have values in [0, 1] (float32) or [0, 255] (uint8). The img_tensor will be visualized in tensorboard.
    put_scalar(name, value, smoothing_hint=True)
        Add a scalar value to the HistoryBuffer associated with name.
        Parameters:
            - smoothing_hint – a ‘hint’ on whether this scalar is noisy and should be smoothed when logged. The hint will be accessible through EventStorage.smoothing_hints(). A writer may ignore the hint and apply custom smoothing rule. It defaults to True because most scalars we save need to be smoothed to provide any useful signal.
    put_scalars(*, smoothing_hint=True, **kwargs)
        Put multiple scalars from keyword arguments.
        Example:
            storage.put_scalars(loss=my_loss, accuracy=my_accuracy, smoothing_hint=True)
    put_histogram(hist_name, hist_tensor, bins=1000)
        Create a histogram from a tensor.
        Parameters:
            - hist_name (str) – The name of the histogram to put into tensorboard.
            - hist_tensor – A Tensor of arbitrary shape to be converted into a histogram.
            - bins (int) – Number of histogram bins.
    latest()
        Returns: dict[str -> (float, int)] – mapping from the name of each scalar to the most recent value and the iteration number at which it was added.
    latest_with_smoothing_hint(window_size=20)
        Similar to latest(), but the returned values are either the un-smoothed original latest value, or a median of the given window_size, depending on whether the smoothing_hint is True. This provides a default behavior that other writers can use.
    smoothing_hints()
        Returns: dict[name -> bool] – the user-provided hint on whether the scalar is noisy and needs smoothing.
    step()
        Users should either: (1) call this function to increment storage.iter when needed, or (2) set storage.iter to the correct iteration number before each iteration. The storage will then be able to associate the new data with an iteration number.
    property iter
        Returns: int – the current iteration number. When used together with a trainer, this is ensured to be the same as trainer.iter.
    property iteration
    name_scope(name)
        Yields: A context within which all the events added to this storage will be prefixed by the name scope.
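A minimal sketch of driving EventStorage by hand, outside of a trainer; the scalar names and values are placeholders, and the class is assumed to be usable as a context manager so that get_event_storage() can find it:

    from fastreid.utils.events import EventStorage, get_event_storage

    with EventStorage(start_iter=0) as storage:
        for it in range(3):
            storage.put_scalar("loss_cls", 0.9 - 0.1 * it)       # placeholder value
            with storage.name_scope("val"):
                storage.put_scalar("accuracy", 0.5 + 0.1 * it)   # stored under the "val" prefix
            assert get_event_storage() is storage                # writers locate the active storage this way
            storage.step()                                       # advance storage.iter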
fastreid.utils.logger module¶
fastreid.utils.logger.setup_logger(output=None, distributed_rank=0, *, color=True, name='fastreid', abbrev_name=None)
    Parameters:
        - output (str) – a file name or a directory to save log. If None, will not save log file. If it ends with ".txt" or ".log", it is assumed to be a file name. Otherwise, logs will be saved to output/log.txt.
        - name (str) – the root module name of this logger
        - abbrev_name (str) – an abbreviation of the module, to avoid long names in logs. Set to "" to not log the root module in logs. By default, will abbreviate "detectron2" to "d2" and leave other modules unchanged.
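For example, a typical call at start-up (the output directory is only an illustration):

    from fastreid.utils.logger import setup_logger

    logger = setup_logger(output="logs/my_experiment", name="fastreid")  # writes logs/my_experiment/log.txt
    logger.info("logger initialized")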
fastreid.utils.logger.log_first_n(lvl, msg, n=1, *, name=None, key='caller')
    Log only for the first n times.
    Parameters:
        - lvl (int) – the logging level
        - msg (str)
        - n (int)
        - name (str) – name of the logger to use. Will use the caller’s module by default.
        - key (str or tuple[str]) – the string(s) can be one of "caller" or "message", which defines how to identify duplicated logs. For example, if called with n=1, key="caller", this function will only log the first call from the same caller, regardless of the message content. If called with n=1, key="message", this function will log the same content only once, even if they are called from different places. If called with n=1, key=("caller", "message"), this function will not log only if the same caller has logged the same message before.
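A short sketch of the deduplication behavior with the default key="caller"; the warning text is a placeholder:

    import logging
    from fastreid.utils.logger import log_first_n

    for epoch in range(100):
        # emitted only once in total, because every call comes from this same line of code
        log_first_n(logging.WARNING, "pretrained weights not found, training from scratch", n=1)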
fastreid.utils.logger.log_every_n(lvl, msg, n=1, *, name=None)
    Log once per n times.
    Parameters:
        - lvl (int) – the logging level
        - msg (str)
        - n (int)
        - name (str) – name of the logger to use. Will use the caller’s module by default.
fastreid.utils.logger.log_every_n_seconds(lvl, msg, n=1, *, name=None)
    Log no more than once per n seconds.
    Parameters:
        - lvl (int) – the logging level
        - msg (str)
        - n (int)
        - name (str) – name of the logger to use. Will use the caller’s module by default.
fastreid.utils.registry module¶
class fastreid.utils.registry.Registry(name: str)
    Bases: object
    The registry that provides name -> object mapping, to support third-party users’ custom modules.

    To create a registry (e.g. a backbone registry):

        BACKBONE_REGISTRY = Registry('BACKBONE')

    To register an object:

        @BACKBONE_REGISTRY.register()
        class MyBackbone():
            ...

    Or:

        BACKBONE_REGISTRY.register(MyBackbone)
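Registered objects can later be looked up by name; a one-line sketch, assuming the registry exposes a get(name) lookup as in the fvcore-style registries this class mirrors:

        backbone_cls = BACKBONE_REGISTRY.get("MyBackbone")   # raises if the name was never registered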