unienv_data.storages.hdf5

HDF5Storage

HDF5Storage(single_instance_space: HDF5SpaceType, root: Group, capacity: Optional[int] = None, reduce_io: bool = True, check_file: bool = True)

Bases: SpaceStorage[HDF5BatchType, NumpyArrayType, NumpyDeviceType, NumpyDtypeType, NumpyRNGType]
build_space_from_hdf5_file (staticmethod)

build_space_from_hdf5_file(root: Union[Group, Dataset]) -> Tuple[int, Optional[int], HDF5SpaceType]
load_replay_buffer_from_raw_hdf5 (staticmethod)

load_replay_buffer_from_raw_hdf5(path: Union[str, PathLike]) -> ReplayBuffer[HDF5BatchType, NumpyArrayType, NumpyDeviceType, NumpyDtypeType, NumpyRNGType]
call_function_on_first_dataset (staticmethod)

call_function_on_first_dataset(root: Group, function: Callable[[Dataset], Any]) -> Any
call_function_on_every_dataset (staticmethod)

call_function_on_every_dataset(root: Group, function: Callable[[Dataset], None]) -> None
get_from (staticmethod)

get_from(root: Union[Group, Dataset], single_instance_space: HDF5SpaceType, index: Union[int, slice, Sequence[int], BArrayType], reduce_io: bool = True) -> Any
set_to (staticmethod)

set_to(root: Union[Group, Dataset], single_instance_space: HDF5SpaceType, index: Union[int, slice, Sequence[int], BArrayType], value: HDF5BatchType) -> None
create (classmethod)

create(single_instance_space, capacity, cache_path=None, multiprocessing: bool = False, initial_capacity: Optional[int] = None, compression: Union[Dict[str, Any], Optional[str]] = None, compression_level: Union[Dict[str, Any], Optional[int]] = None, chunks: Union[Dict[str, Any], Optional[Union[bool, Tuple[int, ...]]]] = None, reduce_io: bool = True, **kwargs) -> HDF5Storage
load_from (classmethod)

load_from(path, single_instance_space, *, capacity=None, read_only=True, multiprocessing: bool = False, reduce_io: bool = True, **kwargs) -> HDF5Storage
clear

clear() -> None

Clear all data inside the storage and set the length to 0 if the storage has unlimited capacity. For storages with fixed capacity, this should reset the storage to its initial state.
is_fancy_index

is_fancy_index(index: Any) -> bool

Check whether the given index is a fancy index.

Args:
    index (Any): Index to check.

Returns:
    bool: True if the index is a fancy index, False otherwise.
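The exact criterion used by HDF5Storage is not shown here; as a minimal sketch, assuming "fancy" means an integer array or integer sequence (as opposed to a scalar or a slice), the check could look like this (`is_fancy_index` below is an illustrative stand-in, not the library's implementation):

```python
import numpy as np

def is_fancy_index(index) -> bool:
    """Heuristic: treat integer arrays and integer sequences as fancy indexing."""
    if isinstance(index, (int, np.integer, slice)):
        return False
    if isinstance(index, np.ndarray):
        # Integer-dtype arrays with at least one dimension select points.
        return index.dtype.kind in ("i", "u") and index.ndim >= 1
    # Lists and tuples of integers also trigger fancy indexing in numpy/h5py.
    return isinstance(index, (list, tuple))

print(is_fancy_index(3))             # scalar -> False
print(is_fancy_index(slice(0, 10)))  # slice -> False
print(is_fancy_index([4, 1, 7]))     # integer list -> True
```

Distinguishing these cases matters because h5py handles scalar and slice selections directly, while point selections need the normalization steps described by the methods below.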
fancy_indexing_to_supported_indexing

fancy_indexing_to_supported_indexing(index: ndarray) -> Tuple[np.ndarray, np.ndarray]

Convert fancy indexing to the limited form of fancy indexing that h5py supports.

Args:
    index (np.ndarray): Fancy indexing array of shape (N,).

Returns:
    np.ndarray: Limited fancy indexing array of shape (M,).
    np.ndarray: Indices that restore the fetched data to the original order, shape (N,).
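h5py point selections must be strictly increasing with no duplicates. A sketch of the described transformation (not the library's actual code) is `numpy.unique` with `return_inverse=True`: the sorted unique array is the supported index of shape (M,), and the inverse array of shape (N,) rearranges the fetched rows back into the caller's order:

```python
import numpy as np

def fancy_indexing_to_supported_indexing(index):
    """Split an arbitrary fancy index into (sorted unique index, restore order)."""
    supported, restore = np.unique(index, return_inverse=True)
    return supported, restore

data = np.arange(10) * 100              # stand-in for an HDF5 dataset
index = np.array([7, 2, 7, 0])          # unsorted, with a duplicate
supported, restore = fancy_indexing_to_supported_indexing(index)
fetched = data[supported]               # the only read h5py would perform
result = fetched[restore]               # rows back in the requested order
# supported -> [0, 2, 7]; result -> [700, 200, 700, 0]
```

This way a single h5py read of M distinct rows serves a request for N rows in arbitrary order, at the cost of one in-memory permutation.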
fancy_indexing_to_supported_set_indexing

fancy_indexing_to_supported_set_indexing(index: ndarray) -> Tuple[np.ndarray, np.ndarray]

Convert fancy indexing to the limited form of fancy indexing that h5py supports for setting values.

Args:
    index (np.ndarray): Fancy indexing array of shape (N,).

Returns:
    np.ndarray: Limited fancy indexing array of shape (M,).
    np.ndarray: Indices selecting the source data needed to build the limited set data, shape (M,).
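For writes the destinations must likewise be sorted and unique, and duplicate destinations have to be collapsed to a single source row. A sketch under the assumption that the later occurrence wins (mirroring sequential assignment; the library's tie-breaking rule is not documented here):

```python
import numpy as np

def fancy_indexing_to_supported_set_indexing(index):
    """Return sorted unique destinations plus, for each destination, the
    position in the original fancy index whose value should be written."""
    order = np.argsort(index, kind="stable")
    sorted_index = index[order]
    # Keep only the last occurrence of each duplicated destination.
    is_last = np.append(sorted_index[1:] != sorted_index[:-1], True)
    return sorted_index[is_last], order[is_last]

dest = np.zeros(8, dtype=int)           # stand-in for an HDF5 dataset
index = np.array([5, 1, 5, 3])          # destination 5 is written twice
values = np.array([10, 20, 30, 40])
supported, src = fancy_indexing_to_supported_set_indexing(index)
dest[supported] = values[src]           # one sorted-unique write
# dest[5] ends up as 30, the later of the two writes to index 5
```

Both returned arrays have shape (M,): the first is passed to h5py as the selection, the second gathers the matching rows from the caller's value batch.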
io_reduced_indexing

io_reduced_indexing(original_index: ndarray, max_gap_size: int = 8) -> List[Union[slice, np.ndarray, int]]

Convert a supported fancy-index array (sorted, unique values) into a list of slices and fancy indices to reduce the number of IO operations.

Args:
    original_index (np.ndarray): Original indexing array of shape (N,).
    max_gap_size (int): Maximum gap between consecutive indices allowed within a single piece. Defaults to 8.

Returns:
    List[Union[slice, np.ndarray, int]]: List of slices and fancy indices.
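The exact splitting policy is not reproduced here; one plausible sketch, assuming the sorted index is broken wherever the gap to the previous element exceeds `max_gap_size`, with fully contiguous runs emitted as slices, single elements as ints, and the rest as fancy-index pieces:

```python
import numpy as np

def io_reduced_indexing(original_index, max_gap_size=8):
    """Split a sorted, unique index array into pieces that each cost one
    IO operation: contiguous runs become slices, isolated values stay as
    ints, and nearby values (gaps <= max_gap_size) form one fancy piece."""
    pieces = []
    # Start a new piece wherever the gap to the previous index is too large.
    breaks = np.flatnonzero(np.diff(original_index) > max_gap_size) + 1
    for run in np.split(original_index, breaks):
        if len(run) == 1:
            pieces.append(int(run[0]))
        elif len(run) == run[-1] - run[0] + 1:
            pieces.append(slice(int(run[0]), int(run[-1]) + 1))
        else:
            pieces.append(run)
    return pieces

index = np.array([0, 1, 2, 3, 10, 12, 40])
pieces = io_reduced_indexing(index, max_gap_size=2)
# pieces: slice(0, 4), array([10, 12]), 40 -- three reads instead of seven
```

Trading a few discarded rows for fewer point selections is usually a win with HDF5, since each point selection costs a separate chunk lookup while a slice is a single hyperslab read.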