quaterion.train.cache.cache_config module¶
- class CacheConfig(cache_type: CacheType | None = CacheType.AUTO, mapping: Dict[str, CacheType] = <factory>, key_extractors: KeyExtractorType | Dict[str, KeyExtractorType] = <factory>, batch_size: int | None = 32, num_workers: int | None = None, save_dir: str | None = None)[source]¶
Bases: object
Determine cache settings. This class should be passed to configure_caches(); see the usage sketch after the attribute list below.
- batch_size: int | None = 32¶
Batch size to be used in CacheDataLoader during the caching process. It does not affect other training stages.
- key_extractors: KeyExtractorType | Dict[str, KeyExtractorType]¶
Mapping of encoders to key extractor functions required to cache non-hashable objects.
- num_workers: int | None = None¶
Number of workers to be used in CacheDataLoader during the caching process. It does not affect other training stages.
- save_dir: str | None = None¶
If provided, the cache will be saved to the given directory and re-used between launches.
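A minimal usage sketch, assuming TrainableModel is exposed at the top-level quaterion package; the values passed to CacheConfig below are illustrative, not recommendations. The config is typically returned from the model's configure_caches() hook:

```python
from quaterion import TrainableModel
from quaterion.train.cache.cache_config import CacheConfig, CacheType


class MyModel(TrainableModel):
    # Encoder, loss and head configuration omitted; only the caching hook is shown.

    def configure_caches(self) -> CacheConfig:
        return CacheConfig(
            cache_type=CacheType.AUTO,   # CUDA if available, else CPU
            batch_size=32,               # used only while the cache is being filled
            num_workers=4,               # workers for the caching dataloader
            save_dir="./encoder_cache",  # persist the cache between launches
        )
```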
- class CacheType(value)[source]¶
Bases: str, Enum
Available tensor devices to be used for caching.
- AUTO = 'auto'¶
Use CUDA if it is available, else use CPU.
- CPU = 'cpu'¶
Cached tensors are placed on the CPU.
- GPU = 'gpu'¶
Cached tensors are placed on the GPU.
- NONE = 'none'¶
Disable caching.
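For example (a hedged sketch using only the enum values documented above), the cache device can be pinned explicitly, or caching can be switched off:

```python
from quaterion.train.cache.cache_config import CacheConfig, CacheType

# Keep cached tensors on the CPU.
cpu_cached = CacheConfig(cache_type=CacheType.CPU)

# Disable caching for this run.
uncached = CacheConfig(cache_type=CacheType.NONE)
```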
- KeyExtractorType¶
Type of function that extracts a hash value from the input object. Required if there is no other way to distinguish values for caching.
alias of Callable[[Any], Hashable]
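A sketch of a key extractor matching this alias; the record structure, its "id" field, and the encoder name "image_encoder" are hypothetical:

```python
from typing import Any, Hashable

from quaterion.train.cache.cache_config import CacheConfig


def record_key(record: Any) -> Hashable:
    # Hypothetical extractor: a non-hashable record (here a dict) is keyed
    # by a stable identifier it carries.
    return record["id"]


# Per-encoder mapping form; per the signature above, a single callable
# shared by all encoders is also accepted.
config = CacheConfig(key_extractors={"image_encoder": record_key})
```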