fastreid.utils

fastreid.utils.colormap module

fastreid.utils.comm module

This file contains primitives for multi-gpu communication. This is useful when doing distributed training.

fastreid.utils.comm.get_world_size() → int
fastreid.utils.comm.get_rank() → int
fastreid.utils.comm.get_local_rank() → int
Returns

The rank of the current process within the local (per-machine) process group.

fastreid.utils.comm.get_local_size() → int
Returns

The size of the per-machine process group, i.e. the number of processes per machine.

fastreid.utils.comm.is_main_process() → bool
fastreid.utils.comm.synchronize()

Helper function to synchronize (barrier) among all processes when using distributed training

fastreid.utils.comm.all_gather(data, group=None)

Run all_gather on arbitrary picklable data (not necessarily tensors).

Parameters
  • data – any picklable object

  • group – a torch process group. By default, will use a group which contains all ranks on gloo backend.

Returns

list[data] – list of data gathered from each rank
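For illustration, here is a minimal sketch of collecting per-rank results during distributed evaluation; the payload dictionary is hypothetical, and any picklable object would do:

from fastreid.utils import comm

# Each rank contributes its own partial result (any picklable object).
local_results = {"rank": comm.get_rank(), "num_samples": 128}

# Every rank receives the full list; entry i comes from rank i.
all_results = comm.all_gather(local_results)

if comm.is_main_process():
    total = sum(r["num_samples"] for r in all_results)
    print(f"evaluated {total} samples across {comm.get_world_size()} ranks")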

fastreid.utils.comm.gather(data, dst=0, group=None)

Run gather on arbitrary picklable data (not necessarily tensors).

Parameters
  • data – any picklable object

  • dst (int) – destination rank

  • group – a torch process group. By default, will use a group which contains all ranks on gloo backend.

Returns

list[data] – on dst, a list of data gathered from each rank. Otherwise, an empty list.

fastreid.utils.comm.shared_random_seed()
Returns

int – a random number that is the same across all workers.

If workers need a shared RNG, they can use this shared seed to create one.

All workers must call this function, otherwise it will deadlock.
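For example, a sketch of seeding identical per-worker RNGs; the use of numpy here is an assumption for illustration, not part of this module:

import numpy as np

from fastreid.utils import comm

# All workers must reach this call together, or it will deadlock.
seed = comm.shared_random_seed()

# Every rank now draws the same random sequence.
rng = np.random.RandomState(seed)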

fastreid.utils.comm.reduce_dict(input_dict, average=True)

Reduce the values in the dictionary from all processes so that the process with rank 0 has the reduced results.

Parameters
  • input_dict (dict) – inputs to be reduced. All the values must be scalar CUDA Tensors.

  • average (bool) – whether to do average or sum

Returns

a dict with the same keys as input_dict, after reduction.
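A common use, sketched below, is averaging per-rank losses before logging them on the main process; the loss names and values are hypothetical:

import torch

from fastreid.utils import comm

# Per-rank losses; the values must be scalar CUDA tensors.
loss_dict = {
    "loss_cls": torch.tensor(0.5, device="cuda"),
    "loss_triplet": torch.tensor(0.2, device="cuda"),
}

# Average each entry across all processes.
reduced = comm.reduce_dict(loss_dict, average=True)

if comm.is_main_process():
    print({k: v.item() for k, v in reduced.items()})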

fastreid.utils.events module

fastreid.utils.events.get_event_storage()
Returns

The EventStorage object that’s currently being used. Throws an error if no EventStorage is currently enabled.

class fastreid.utils.events.JSONWriter(json_file, window_size=20)

Bases: fastreid.utils.events.EventWriter

Write scalars to a json file. It saves scalars as one json per line (instead of a big json) for easy parsing. Example of parsing such a json file:

$ cat metrics.json | jq -s '.[0:2]'
[
  {
    "data_time": 0.008433341979980469,
    "iteration": 19,
    "loss": 1.9228371381759644,
    "loss_box_reg": 0.050025828182697296,
    "loss_classifier": 0.5316952466964722,
    "loss_mask": 0.7236229181289673,
    "loss_rpn_box": 0.0856662318110466,
    "loss_rpn_cls": 0.48198649287223816,
    "lr": 0.007173333333333333,
    "time": 0.25401854515075684
  },
  {
    "data_time": 0.007216215133666992,
    "iteration": 39,
    "loss": 1.282649278640747,
    "loss_box_reg": 0.06222952902317047,
    "loss_classifier": 0.30682939291000366,
    "loss_mask": 0.6970193982124329,
    "loss_rpn_box": 0.038663312792778015,
    "loss_rpn_cls": 0.1471673548221588,
    "lr": 0.007706666666666667,
    "time": 0.2490077018737793
  }
]
$ cat metrics.json | jq '.loss_mask'
0.7126231789588928
0.689423680305481
0.6776131987571716
...
__init__(json_file, window_size=20)
Parameters
  • json_file (str) – path to the json file. New data will be appended if the file exists.

  • window_size (int) – the window size of median smoothing for the scalars whose smoothing_hint is True.

write()
close()
class fastreid.utils.events.TensorboardXWriter(log_dir: str, window_size: int = 20, **kwargs)

Bases: fastreid.utils.events.EventWriter

Write all scalars to a tensorboard file.

__init__(log_dir: str, window_size: int = 20, **kwargs)
Parameters
  • log_dir (str) – the directory to save the output events

  • window_size (int) – the scalars will be median-smoothed by this window size

  • kwargs – other arguments passed to torch.utils.tensorboard.SummaryWriter(…)

write()
close()
class fastreid.utils.events.CommonMetricPrinter(max_iter)

Bases: fastreid.utils.events.EventWriter

Print common metrics to the terminal, including iteration time, ETA, memory, all losses, and the learning rate. It also applies smoothing using a window of 20 elements. It’s meant to print common metrics in common ways. To print something in more customized ways, please implement a similar printer by yourself.

__init__(max_iter)
Parameters

max_iter (int) – the maximum number of iterations to train. Used to compute ETA.

write()
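The three writers are typically driven together from a training loop that runs inside an open EventStorage. The following sketch assumes EventStorage can be used as a context manager (as in the detectron2 code this module derives from); the paths, the write-every-20-iterations cadence, and the placeholder loss are illustrative, and the output directory is assumed to exist:

from fastreid.utils.events import (
    CommonMetricPrinter, EventStorage, JSONWriter, TensorboardXWriter,
)

max_iter = 1000
writers = [
    CommonMetricPrinter(max_iter),
    JSONWriter("./output/metrics.json"),
    TensorboardXWriter("./output"),
]

with EventStorage(start_iter=0) as storage:
    for iteration in range(max_iter):
        loss = 1.0 / (iteration + 1)  # placeholder for a real training step
        storage.put_scalar("total_loss", loss)
        if (iteration + 1) % 20 == 0:
            for writer in writers:
                writer.write()
        storage.step()

for writer in writers:
    writer.close()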
class fastreid.utils.events.EventStorage(start_iter=0)

Bases: object

The user-facing class that provides metric storage functionalities. In the future we may add support for storing / logging other types of data if needed.

__init__(start_iter=0)
Parameters

start_iter (int) – the iteration number to start with

put_image(img_name, img_tensor)

Add an img_tensor associated with img_name, to be shown on tensorboard.

Parameters
  • img_name (str) – The name of the image to put into tensorboard.

  • img_tensor – A uint8 or float Tensor of shape [channel, height, width] where channel is 3. The image format should be RGB. The elements in img_tensor can either have values in [0, 1] (float32) or [0, 255] (uint8). The img_tensor will be visualized in tensorboard.

put_scalar(name, value, smoothing_hint=True)

Add a scalar value to the HistoryBuffer associated with name.

Parameters

smoothing_hint (bool) – a ‘hint’ on whether this scalar is noisy and should be smoothed when logged. The hint will be accessible through EventStorage.smoothing_hints(). A writer may ignore the hint and apply a custom smoothing rule. It defaults to True because most scalars we save need to be smoothed to provide any useful signal.

put_scalars(*, smoothing_hint=True, **kwargs)

Put multiple scalars from keyword arguments. Example:

storage.put_scalars(loss=my_loss, accuracy=my_accuracy, smoothing_hint=True)

put_histogram(hist_name, hist_tensor, bins=1000)

Create a histogram from a tensor.

Parameters
  • hist_name (str) – The name of the histogram to put into tensorboard.

  • hist_tensor – A Tensor of arbitrary shape to be converted into a histogram.

  • bins (int) – Number of histogram bins.

history(name)
Returns

HistoryBuffer – the scalar history for name

histories()
Returns

dict[name -> HistoryBuffer] – the HistoryBuffer for all scalars

latest()
Returns

dict[str -> (float, int)] – mapping from the name of each scalar to the most recent value and the iteration number at which it was added.

latest_with_smoothing_hint(window_size=20)

Similar to latest(), but the returned values are either the un-smoothed original latest value, or a median of the given window_size, depending on whether the smoothing_hint is True. This provides a default behavior that other writers can use.

smoothing_hints()
Returns

dict[name -> bool] – the user-provided hint on whether the scalar is noisy and needs smoothing.

step()

The user should either: (1) call this function to increment storage.iter when needed, or (2) set storage.iter to the correct iteration number before each iteration. The storage will then be able to associate the new data with an iteration number.

property iter

Returns: int: The current iteration number. When used together with a trainer, this is ensured to be the same as trainer.iter.

property iteration
name_scope(name)
Yields

A context within which all the events added to this storage will be prefixed by the name scope.

clear_images()

Delete all the stored images for visualization. This should be called after images are written to tensorboard.

clear_histograms()

Delete all the stored histograms for visualization. This should be called after histograms are written to tensorboard.
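To make the EventStorage API above concrete, here is a short sketch of recording metrics, again assuming the context-manager usage shown earlier; the scalar names and values are illustrative:

from fastreid.utils.events import EventStorage, get_event_storage

with EventStorage(start_iter=0) as storage:
    storage.put_scalars(loss=0.9, accuracy=0.4, smoothing_hint=True)

    # Events added inside this block are prefixed with the scope name.
    with storage.name_scope("val"):
        storage.put_scalar("accuracy", 0.5)

    # Anywhere inside the context, library code can fetch the active storage.
    assert get_event_storage() is storage

    print(storage.latest()["loss"])  # most recent value and its iteration
    storage.step()  # advance storage.iter before the next iteration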

fastreid.utils.logger module

fastreid.utils.logger.setup_logger(output=None, distributed_rank=0, *, color=True, name='fastreid', abbrev_name=None)
Parameters
  • output (str) – a file name or a directory in which to save the log. If None, no log file will be saved. If it ends with “.txt” or “.log”, it is assumed to be a file name. Otherwise, logs will be saved to output/log.txt.

  • name (str) – the root module name of this logger

  • abbrev_name (str) – an abbreviation of the module, to avoid long names in logs. Set to “” to not log the root module in logs. By default, will abbreviate “detectron2” to “d2” and leave other modules unchanged.
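A minimal sketch of typical use in a distributed entry point; the output path is illustrative, and it is assumed here that setup_logger returns the configured logger:

from fastreid.utils import comm
from fastreid.utils.logger import setup_logger

# Each process tags its messages with its rank; logs go to ./logs/log.txt.
logger = setup_logger(output="./logs", distributed_rank=comm.get_rank())
logger.info("logger initialized")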

fastreid.utils.logger.log_first_n(lvl, msg, n=1, *, name=None, key='caller')

Log only for the first n times.

Parameters
  • lvl (int) – the logging level

  • msg (str) –

  • n (int) –

  • name (str) – name of the logger to use. Will use the caller’s module by default.

  • key (str or tuple[str]) – the string(s) can be one of “caller” or “message”, which defines how to identify duplicated logs. For example, if called with n=1, key=”caller”, this function will only log the first call from the same caller, regardless of the message content. If called with n=1, key=”message”, this function will log the same content only once, even if it is called from different places. If called with n=1, key=(“caller”, “message”), this function will not log only if the same caller has logged the same message before.

fastreid.utils.logger.log_every_n(lvl, msg, n=1, *, name=None)

Log once per n times.

Parameters
  • lvl (int) – the logging level

  • msg (str) –

  • n (int) –

  • name (str) – name of the logger to use. Will use the caller’s module by default.

fastreid.utils.logger.log_every_n_seconds(lvl, msg, n=1, *, name=None)

Log no more than once per n seconds.

Parameters
  • lvl (int) – the logging level

  • msg (str) –

  • n (int) –

  • name (str) – name of the logger to use. Will use the caller’s module by default.
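The three helpers differ only in how they throttle; a sketch with illustrative messages:

import logging

from fastreid.utils.logger import log_every_n, log_every_n_seconds, log_first_n

for _ in range(100):
    # Emitted at most 5 times from this call site (key='caller' by default).
    log_first_n(logging.WARNING, "slow dataset detected", n=5)

    # Emitted once every 10 calls.
    log_every_n(logging.INFO, "still iterating", n=10)

    # Emitted no more than once every 2 seconds.
    log_every_n_seconds(logging.INFO, "heartbeat", n=2)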

fastreid.utils.registry module

class fastreid.utils.registry.Registry(name: str)

Bases: object

The registry that provides name -> object mapping, to support third-party users’ custom modules.

To create a registry (e.g. a backbone registry):

BACKBONE_REGISTRY = Registry('BACKBONE')

To register an object:

@BACKBONE_REGISTRY.register()
class MyBackbone:
    ...

Or:

BACKBONE_REGISTRY.register(MyBackbone)

__init__(name: str) → None
Parameters

name (str) – the name of this registry

register(obj: Optional[object] = None) → Optional[object]

Register the given object under the name obj.__name__. Can be used as either a decorator or not. See docstring of this class for usage.

get(name: str) → object
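Putting the pieces together, a self-contained sketch of defining, populating, and querying a registry; the backbone names are illustrative:

from fastreid.utils.registry import Registry

BACKBONE_REGISTRY = Registry("BACKBONE")

@BACKBONE_REGISTRY.register()
class MyBackbone:
    def __init__(self, depth=50):
        self.depth = depth

# Look the class up by name (e.g. from a config string) and instantiate it.
backbone_cls = BACKBONE_REGISTRY.get("MyBackbone")
backbone = backbone_cls(depth=101)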

fastreid.utils.memory module

fastreid.utils.analysis module

fastreid.utils.visualizer module

fastreid.utils.video_visualizer module