- 21 Feb, 2021 1 commit
-
-
Jean Ibarz authored
Fixed a bug in the CoordConv layer implementation, where coordinates were not scaled correctly along each dimension.
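The essence of the fix is that each coordinate channel must be normalized by its own dimension's size, not by a single shared size. A minimal NumPy sketch of correctly scaled coordinate channels (the function name and exact layout are hypothetical, not taken from the repository):

```python
import numpy as np

def coord_channels(height, width):
    """Build two coordinate channels, each scaled to [-1, 1] along
    its OWN dimension -- using one dimension's size to scale both
    axes is the kind of bug the commit above describes fixing."""
    # Row coordinates, scaled by (height - 1)
    yy = np.linspace(-1.0, 1.0, height).reshape(height, 1)
    yy = np.repeat(yy, width, axis=1)
    # Column coordinates, scaled by (width - 1)
    xx = np.linspace(-1.0, 1.0, width).reshape(1, width)
    xx = np.repeat(xx, height, axis=0)
    return np.stack([yy, xx], axis=-1)  # shape (H, W, 2)
```

In a CoordConv layer these channels are concatenated to the input features before the convolution.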
-
- 16 Feb, 2021 1 commit
-
-
Jean Ibarz authored
Added a CoordConv layer implementation, sourced from https://raw.githubusercontent.com/uber-research/CoordConv.
-
- 12 Feb, 2021 5 commits
-
-
Jean Ibarz authored
Added utility functions to generate multilabel azimuth encodings from azimuth values, and a corresponding test.
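The commit does not show the encoding itself; one plausible multilabel scheme (an assumption, purely illustrative) marks every azimuth class whose bin center lies within a tolerance of the true azimuth:

```python
import numpy as np

def azimuth_to_multilabel(azimuth_deg, n_classes=12, tol_deg=30.0):
    """Encode an azimuth (degrees) as a multi-hot vector: every
    class whose bin center lies within `tol_deg` of the azimuth
    (measured circularly) is set to 1.  Hypothetical sketch; the
    repository's encoding may differ."""
    centers = np.arange(n_classes) * (360.0 / n_classes)
    # Circular angular distance between the azimuth and each center
    d = np.abs((centers - azimuth_deg + 180.0) % 360.0 - 180.0)
    return (d <= tol_deg).astype(np.float32)
```

With a tolerance matching the bin width, each azimuth activates its own bin plus the two neighbors, which gives the model partial credit for near-miss predictions.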
-
Jean Ibarz authored
Added code for class balancing by repeating 'center' class inputs when training the 'left_center_right' model.
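Repeating an under-represented class's samples is simple oversampling; a NumPy sketch of the idea (the helper name and signature are hypothetical):

```python
import numpy as np

def oversample_class(x, y, target_class, factor):
    """Append `factor` extra copies of every sample belonging to
    `target_class`, so that class is better represented during
    training.  Illustrative only; the repository may balance
    classes differently."""
    mask = (y == target_class)
    x_extra = np.repeat(x[mask], factor, axis=0)
    y_extra = np.repeat(y[mask], factor, axis=0)
    return np.concatenate([x, x_extra]), np.concatenate([y, y_extra])
```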
-
Jean Ibarz authored
Updated the experiment configuration, and modified the log directory name to include the name of the trained model.
-
Jean Ibarz authored
-
Jean Ibarz authored
Removed the non-linear 'sigmoid' activation function in models, as it is not appropriate for regression. This effectively improves training of the 'default' model, while it doesn't change the behavior of the 'one_sided' model.
-
- 11 Feb, 2021 13 commits
-
-
Jean Ibarz authored
Added TensorBoard to the requirements, added logging of the loss when training a model, and updated the default experiment configuration.
-
Jean Ibarz authored
-
Jean Ibarz authored
Added a 'one_sided' model that localizes azimuth in [0, 180] degrees only, and added code to train this model.
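Restricting azimuth to [0, 180] suggests folding the full circle so mirror-symmetric left/right positions share a target. Whether the 'one_sided' model uses exactly this mapping is an assumption; a minimal sketch:

```python
def fold_azimuth(azimuth_deg):
    """Fold a full-circle azimuth (0-360 degrees) onto [0, 180],
    so positions mirrored about the front-back axis share one
    label.  Hypothetical helper illustrating one way a 'one_sided'
    target could be derived."""
    a = azimuth_deg % 360.0
    return a if a <= 180.0 else 360.0 - a
```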
-
Jean Ibarz authored
-
Jean Ibarz authored
-
Jean Ibarz authored
-
Jean Ibarz authored
Renamed the 'train2.py' script to 'train.py', and lowered the random_shift parameter from 100 to 10 for faster model training.
-
Jean Ibarz authored
-
Jean Ibarz authored
-
Jean Ibarz authored
-
Jean Ibarz authored
Removed the randomization process in generate_augmented_labelled_dataset(), as this process is now done inside the model.
-
Jean Ibarz authored
Updated the train2.py script to do left and right zero padding when using convolution with mode 'valid'. Stimulus truncation with n_samples, which was incremented for each training iteration, has also been disabled; it should be removed in the future.
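The padding trick here: a 'valid' convolution shrinks the output by (kernel length - 1) samples, so zero-padding the signal by that total amount beforehand restores the original length. A sketch under that assumption (helper name hypothetical):

```python
import numpy as np

def pad_for_valid(x, kernel_len):
    """Left- and right-zero-pad a 1-D signal so that a subsequent
    'valid' convolution with a length-`kernel_len` kernel returns
    an output the same length as the original input."""
    left = (kernel_len - 1) // 2
    right = kernel_len - 1 - left
    return np.pad(x, (left, right))
```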
-
Jean Ibarz authored
Added parameters to default_model_creator() for on-line data augmentation: random scale and random shift.
-
- 10 Feb, 2021 5 commits
-
-
Jean Ibarz authored
-
Jean Ibarz authored
-
Jean Ibarz authored
Refactored generate_augmented_labelled_dataset(), and added a parameter to select the convolution mode ('full', 'valid' (the default), or 'same').
-
Jean Ibarz authored
-
Jean Ibarz authored
Added a test to visually check that scipy.signal.convolve modes {'full', 'valid', 'same'} behave as expected.
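For reference, the three modes differ only in how much of the sliding overlap is kept. The sketch below uses np.convolve, whose 1-D mode semantics match scipy.signal.convolve when the signal is longer than the kernel:

```python
import numpy as np

# For signal length N and kernel length K (N >= K):
#   'full'  -> length N + K - 1  (every partial overlap)
#   'same'  -> length N          (centered on the input)
#   'valid' -> length N - K + 1  (complete overlaps only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([1.0, 1.0, 1.0])
full = np.convolve(x, h, mode='full')    # length 7
same = np.convolve(x, h, mode='same')    # length 5
valid = np.convolve(x, h, mode='valid')  # length 3
```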
-
- 05 Feb, 2021 2 commits
-
-
jean Ibarz authored
Added GtfLayer(), a Conv1D layer with non-trainable kernels whose values are loaded from time-reversed Gammatone filter impulse responses, which are either passed directly or loaded from a directory containing the Gammatone filters. A test verifies that the layer can process some input, and some lines can be uncommented to check visually that the filters are correctly applied to the input (a Dirac impulse with some silence before and after).
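The time reversal is the standard trick for implementing true convolution with a framework's cross-correlation kernels. A NumPy sketch of the underlying operation (not the Keras layer itself; function name hypothetical):

```python
import numpy as np

def gtf_filter(signal, impulse_responses):
    """Apply a bank of fixed (non-trainable) filters: convolving
    the signal with each time-reversed impulse response, which is
    equivalent to cross-correlating with the original response --
    the idea behind loading time-reversed kernels into a Conv1D
    layer."""
    out = []
    for ir in impulse_responses:
        out.append(np.convolve(signal, ir[::-1], mode='same'))
    return np.stack(out, axis=-1)  # (time, n_filters)
```

Feeding a Dirac impulse through the bank, as the commit's visual check does, recovers each (reversed) impulse response in the corresponding output channel.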
-
jean Ibarz authored
-
- 04 Feb, 2021 3 commits
-
-
jean Ibarz authored
The input of the model has been standardized to BWHF (Batch, Width, Height, Features) shape. RandomScale2DLayer and RandomShift2DLayer have been incorporated into the model in order to apply these two data augmentation steps on-the-fly during training of the model, but not during testing.
-
jean Ibarz authored
Fixed a bug in RandomScale2DLayer, where the input tensor shape must be evaluated dynamically in the case where the batch size is dynamically allocated.
-
jean Ibarz authored
Added two layers to apply random scaling or random shifting to tensors of standardized shape BWHF (Batch, Width, Height, Features). When training is False, the layers do not apply random scaling or shifting. However, the random shift layer still applies a right zero padding to ensure that the shape of the layer output is identical during training and testing.
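The shape-invariance trick is that the total padding is constant: a shift of `s` becomes `s` zeros on the left and `max_shift - s` on the right, so the output length never depends on the random draw. A 1-D NumPy sketch of that behavior (the real layers operate on BWHF tensors; names hypothetical):

```python
import numpy as np

def random_shift_right_pad(x, max_shift, training, rng=None):
    """Shift a 1-D signal right by a random amount during training;
    at test time the shift is zero, but the signal is still padded
    to length len(x) + max_shift so the output shape is identical
    in both modes."""
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(0, max_shift + 1)) if training else 0
    # Total padding is always max_shift, split between left (the
    # shift itself) and right (the remainder).
    return np.pad(x, (shift, max_shift - shift))
```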
-
- 03 Feb, 2021 2 commits
-
-
Jean Ibarz authored
Added a bank of 20 Gammatone filters of 512 coefficients, at a sampling rate of 44100 Hz, covering the frequency range 100-20000 Hz. Filters are windowed with the right half of a Hann window (of size 1024).
-
Jean Ibarz authored
Added a script to generate Gammatone FIR filters in .wav format, using the library from Bingo Todd. Because rectangular windowing produces ringing in the magnitude of the frequency response at low frequencies when the filters have ~512 coefficients, we apply a window to the filter impulse response in order to improve filter quality. After windowing, a normalization is applied to restore the correct magnitude peak in the frequency response.
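A sketch of the two steps described (taper with the right, decaying half of a double-length Hann window, then rescale so the frequency-response peak is unity). The exact normalization used by the script is not shown in the log, so this is an assumption:

```python
import numpy as np

def window_and_normalize(ir):
    """Taper a FIR impulse response with the right half of a Hann
    window of twice the filter length (so the taper starts at 1
    and decays to 0), then rescale so the peak magnitude of the
    frequency response equals 1."""
    n = len(ir)
    hann = np.hanning(2 * n)
    windowed = ir * hann[n:]  # right (decaying) half of the window
    # Peak of the magnitude response, on a zero-padded FFT grid
    mag = np.abs(np.fft.rfft(windowed, 4 * n))
    return windowed / mag.max()
```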
-
- 02 Feb, 2021 1 commit
-
-
Michaël Lauer authored
Changed the filename split to work on both Windows and Unix file systems.
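The usual portable fix is to let os.path handle the platform's separator instead of splitting on a hard-coded '/' or '\\'. A sketch of that kind of change (the helper name is hypothetical; the repository's code may differ):

```python
import os

def stem(path):
    """Return the filename without its directory or extension,
    using os.path so the current platform's path separator is
    handled correctly."""
    return os.path.splitext(os.path.basename(path))[0]
```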
-
- 01 Feb, 2021 7 commits
-
-
Michaël Lauer authored
-
Michaël Lauer authored
-
Michaël Lauer authored
Added a set_gpus function in utils to support CPU or GPU configuration. Changed scripts using the previous method to use the new function.
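The log does not show the function body; a hedged sketch of what such a utility could look like with TensorFlow's device-configuration API (the signature and memory-growth choice are assumptions, not the repository's actual code). This is configuration code, so it is illustrative rather than testable in isolation:

```python
import tensorflow as tf

def set_gpus(gpu_indices):
    """Restrict TensorFlow to the given GPU indices; an empty list
    forces CPU-only execution.  Must be called before any GPU is
    initialized.  Hypothetical sketch of a `set_gpus` utility."""
    gpus = tf.config.list_physical_devices('GPU')
    visible = [gpus[i] for i in gpu_indices]
    tf.config.set_visible_devices(visible, 'GPU')
    for gpu in visible:
        # Allocate GPU memory on demand instead of reserving it all.
        tf.config.experimental.set_memory_growth(gpu, True)
```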
-
Jean Ibarz authored
-
Jean Ibarz authored
-
jean Ibarz authored
-
jean Ibarz authored
Split the IRCAM database into raw HRIRs, diffuse-field compensated HRIRs, and max-amplitude compensated HRIRs.
-