Keras 3D CNN gets stuck at seemingly random points during training on Colab

#python #tensorflow #keras #conv-neural-network #google-colaboratory

Question:

I'm new to keras, Colab, and deep learning in general, so apologies for any mistakes. I'm trying to train a 3D U-Net model on the BraTS dataset using keras on Google Colab. The model keeps getting stuck in the middle of an epoch at seemingly random points. I can't figure out why this happens; I thought the data might be too large, so I tried smaller batch sizes and even smaller 3D patches. That seems to help, but it doesn't solve the problem. I don't get an OOM error (only about half of Colab's VRAM is in use during training and RAM is barely used), and the Colab local disk always has at least 20 GB free when it hangs. The execution log shows no errors. When it gets stuck the notebook becomes unresponsive: I can't interrupt execution or access my Drive from the Colab file browser, although I can still browse the VM's local disk. I end up having to restart the VM and lose my progress. I haven't tried running this on my own hardware since my GPU isn't powerful enough. Please let me know if any information is missing.
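
For what it's worth, this is the kind of per-batch memory logging I can add to back up the claim that memory isn't the issue (a minimal sketch, not part of my original run; it assumes TF 2.5's tf.config.experimental.get_memory_info and the psutil package that ships with Colab):

 import psutil
 import tensorflow as tf

 class MemoryLogger(tf.keras.callbacks.Callback):
     """Print host RAM and GPU memory usage after every training batch."""
     def on_train_batch_end(self, batch, logs=None):
         ram = psutil.virtual_memory()
         gpu = tf.config.experimental.get_memory_info('GPU:0')
         print(" batch %d: RAM %.1f/%.1f GB, GPU current %.1f GB, peak %.1f GB"
               % (batch,
                  (ram.total - ram.available) / 1e9, ram.total / 1e9,
                  gpu['current'] / 1e9, gpu['peak'] / 1e9))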

Versions

  • Python 3.7.10
  • keras 2.5.0
  • tf 2.5.0

I'm using the exact model from this repo: https://github.com/shalabh147/Brain-Tumor-Segmentation-and-Survival-Prediction-using-Deep-Neural-Networks/blob/master/3d_Unet_v1/3dunet.py

I feed the model data through the custom data generator below. The data is preprocessed into 128*128*128 patches that were saved to my Drive in hdf5 format. I noticed that using smaller 64*64*64 patches reduces the chance of it getting stuck, although it still hangs sometimes, and it noticeably hurts prediction quality, so that isn't a solution for me.

 import h5py
 import numpy as np
 import tensorflow as tf
 from sklearn.utils import class_weight
 from tensorflow.keras.utils import to_categorical

 class VolumeDataGenerator(tf.keras.utils.Sequence):
     def __init__(self,
                  sample_list,
                  base_dir,
                  batch_size=1,
                  shuffle=True,
                  dim=(128, 128, 128),
                  num_channels=4,
                  num_classes=4,
                  verbose=1):
         self.batch_size = batch_size
         self.shuffle = shuffle
         self.base_dir = base_dir
         self.dim = dim
         self.num_channels = num_channels
         self.num_classes = num_classes
         self.verbose = verbose
         self.sample_list = sample_list
         self.on_epoch_end()

     def on_epoch_end(self):
         'Updates indexes after each epoch'
         self.indexes = np.arange(len(self.sample_list))
         if self.shuffle:
             np.random.shuffle(self.indexes)

     def __len__(self):
         'Denotes the number of batches per epoch'
         return int(np.floor(len(self.sample_list) / self.batch_size))

     def __data_generation(self, list_IDs_temp):
         'Generates data containing batch_size samples'
         # Initialization
         X = np.zeros((self.batch_size, *self.dim, self.num_channels),
                      dtype=np.float64)
         y = np.zeros((self.batch_size, *self.dim, self.num_classes),
                      dtype=np.float64)
         # Generate data
         for i, ID in enumerate(list_IDs_temp):
             # Store sample
             if self.verbose == 1:
                 print("Training on: %s" % (self.base_dir + ID))

             # Each patch file holds an "X" volume and an integer "y" label volume
             with h5py.File(self.base_dir + ID, 'r') as f:
                 X[i] = np.array(f.get("X"))
                 label = np.array(f.get("y"))
                 label = to_categorical(label, num_classes=4)
                 y[i] = label
         return X, y

     def __getitem__(self, index):
         # Generate indexes of the batch
         indexes = self.indexes[
                   index * self.batch_size: (index + 1) * self.batch_size]
         # Find list of IDs
         sample_list_temp = [self.sample_list[k] for k in indexes]
         # Generate data
         X, y = self.__data_generation(sample_list_temp)
         # Per-voxel sample weights from balanced class weights
         vector_label = y.flatten()
         class_weights = class_weight.compute_class_weight(
             'balanced', classes=np.unique(vector_label), y=vector_label)
         sample_weights = generate_sample_weights(y, class_weights)  # helper defined elsewhere in my notebook
         del class_weights
         del vector_label
         return X, y, sample_weights
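
For context, this is how I spot-check a single patch file (a small sketch; the file name here is made up, and it assumes the same "X"/"y" keys and the TRAIN_PATCH_DIR used in the training code below):

 import h5py
 import numpy as np

 # Spot-check one preprocessed patch (hypothetical file name)
 with h5py.File(TRAIN_PATCH_DIR + "BraTS20_Training_001_patch0.h5", "r") as f:
     x = np.array(f.get("X"))
     label = np.array(f.get("y"))

 print(x.shape, x.dtype)   # expected (128, 128, 128, 4)
 print(np.unique(label))   # expected integer labels 0..3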
 

Here's the training code. I experimented with different batch sizes with little difference. I also tried splitting my dataset; that works at first, but it still gets stuck on the 3rd or 4th sample.

 import datetime
 from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger

 filepath = HOME_DIR + "/models/saved_model_Adam_q_clean_{epoch:02d}.hdf5"
 checkpoint = ModelCheckpoint(filepath, verbose=1, save_best_only=False)

 csvlog = CSVLogger(HOME_DIR + '/Adam_quick_clean.csv', separator=',', append=True)

 log_dir = HOME_DIR + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
 tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

 callbacks_list = [checkpoint, csvlog, tensorboard_callback]

 batch_size = 4
 train_ech = train_list

 train_generator = VolumeDataGenerator(train_ech, TRAIN_PATCH_DIR, batch_size=batch_size, dim=(128, 128, 128), verbose=0)
 valid_generator = VolumeDataGenerator(valid_list, VALID_PATCH_DIR, batch_size=batch_size, dim=(128, 128, 128), verbose=1)
 train_steps = len(train_ech) // batch_size
 valid_steps = len(valid_list) // batch_size

 nb_epoch = 3
 model.fit(train_generator, validation_data=valid_generator, steps_per_epoch=train_steps, validation_steps=valid_steps,
           workers=1, epochs=nb_epoch, verbose=1, callbacks=callbacks_list)
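
To try to isolate the input pipeline, this is the kind of loop I can run instead of model.fit, timing the generator on its own to see whether the stall comes from the hdf5/Drive reads rather than the GPU step (a minimal sketch reusing train_generator and train_steps from above):

 import time

 # Iterate the generator directly, outside model.fit, and time each batch
 for step in range(train_steps):
     t0 = time.time()
     X, y, w = train_generator[step]
     print("batch %d: %.1f s, X %s, y %s" % (step, time.time() - t0, X.shape, y.shape))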
 

VM Log

 {"pid":1,"type":"jupyter","level":40,"msg":"Config option `delete_to_trash` not recognized by `ColabFileContentsManager`.","time":"2021-06-01T18:34:16.607Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"Config option `delete_to_trash` not recognized by `ColabFileContentsManager`.","time":"2021-06-01T18:34:16.607Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret","time":"2021-06-01T18:34:16.624Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret","time":"2021-06-01T18:34:16.626Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"google.colab serverextension initialized.","time":"2021-06-01T18:34:16.676Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Serving notebooks from local directory: /","time":"2021-06-01T18:34:16.677Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"0 active kernels","time":"2021-06-01T18:34:16.677Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-06-01T18:34:16.678Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.2:9000/","time":"2021-06-01T18:34:16.678Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-06-01T18:34:16.678Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"google.colab serverextension initialized.","time":"2021-06-01T18:34:16.679Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Serving notebooks from local directory: /","time":"2021-06-01T18:34:16.680Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"0 active kernels","time":"2021-06-01T18:34:16.680Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-06-01T18:34:16.680Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.12:9000/","time":"2021-06-01T18:34:16.680Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-06-01T18:34:16.681Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Kernel started: 1eeb5167-90d7-4636-8e5f-c31d1d4b5ee8","time":"2021-06-01T18:35:12.000Z","v":0}
{"pid":1,"type":"jupyter","level":30,"msg":"Adapting to protocol v5.1 for kernel 1eeb5167-90d7-4636-8e5f-c31d1d4b5ee8","time":"2021-06-01T18:35:13.676Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:33.825520: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0","time":"2021-06-01T18:40:33.825Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.253120: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1","time":"2021-06-01T18:40:39.253Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.307495: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.307Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.308467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: ","time":"2021-06-01T18:40:39.308Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5","time":"2021-06-01T18:40:39.308Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s","time":"2021-06-01T18:40:39.308Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.308540: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0","time":"2021-06-01T18:40:39.309Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.440162: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11","time":"2021-06-01T18:40:39.440Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.440322: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11","time":"2021-06-01T18:40:39.440Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.617512: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10","time":"2021-06-01T18:40:39.617Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.631474: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10","time":"2021-06-01T18:40:39.631Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.911965: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.10","time":"2021-06-01T18:40:39.912Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.933432: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11","time":"2021-06-01T18:40:39.933Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.938233: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8","time":"2021-06-01T18:40:39.938Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.938453: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.938Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.939590: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.939Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.943829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0","time":"2021-06-01T18:40:39.943Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.945168: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.945Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.946040: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: ","time":"2021-06-01T18:40:39.946Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5","time":"2021-06-01T18:40:39.946Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s","time":"2021-06-01T18:40:39.946Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.946165: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.947Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.947157: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:39.947Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.948108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0","time":"2021-06-01T18:40:39.948Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:39.950891: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0","time":"2021-06-01T18:40:39.951Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.303233: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:","time":"2021-06-01T18:40:44.303Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.303284: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 ","time":"2021-06-01T18:40:44.303Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.303299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N ","time":"2021-06-01T18:40:44.304Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.303519: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:44.305Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.304691: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:44.305Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.307171: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero","time":"2021-06-01T18:40:44.307Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.308035: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.","time":"2021-06-01T18:40:44.308Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:44.308096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13837 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)","time":"2021-06-01T18:40:44.308Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:45.788788: I tensorflow/core/profiler/lib/profiler_session.cc:126] Profiler session initializing.","time":"2021-06-01T18:40:45.788Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:45.788821: I tensorflow/core/profiler/lib/profiler_session.cc:141] Profiler session started.","time":"2021-06-01T18:40:45.789Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:45.789041: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1611] Profiler found 1 GPUs","time":"2021-06-01T18:40:45.789Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:45.821160: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcupti.so.11.0","time":"2021-06-01T18:40:45.821Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:46.036913: I tensorflow/core/profiler/lib/profiler_session.cc:159] Profiler session tear down.","time":"2021-06-01T18:40:46.037Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:40:46.037255: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1743] CUPTI activity buffer flushed","time":"2021-06-01T18:40:46.037Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:01.630290: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)","time":"2021-06-01T18:41:01.630Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:01.631423: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2199995000 Hz","time":"2021-06-01T18:41:01.631Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:20.776073: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8","time":"2021-06-01T18:41:20.777Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:22.896505: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8004","time":"2021-06-01T18:41:22.896Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:47.186323: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11","time":"2021-06-01T18:41:47.186Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:41:49.705521: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11","time":"2021-06-01T18:41:49.705Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:01.037566: I tensorflow/core/profiler/lib/profiler_session.cc:126] Profiler session initializing.","time":"2021-06-01T18:42:01.037Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:01.037614: I tensorflow/core/profiler/lib/profiler_session.cc:141] Profiler session started.","time":"2021-06-01T18:42:01.038Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:02.997289: I tensorflow/core/profiler/lib/profiler_session.cc:66] Profiler session collecting data.","time":"2021-06-01T18:42:02.997Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.002017: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1743] CUPTI activity buffer flushed","time":"2021-06-01T18:42:03.002Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.193089: I tensorflow/core/profiler/internal/gpu/cupti_collector.cc:673]  GpuTracer has collected 1770 callback api events and 1761 activity events. ","time":"2021-06-01T18:42:03.193Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.252398: I tensorflow/core/profiler/lib/profiler_session.cc:159] Profiler session tear down.","time":"2021-06-01T18:42:03.252Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.340239: I tensorflow/core/profiler/rpc/client/save_profile.cc:137] Creating directory: drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03","time":"2021-06-01T18:42:03.340Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.402951: I tensorflow/core/profiler/rpc/client/save_profile.cc:143] Dumped gzipped tool data for trace.json.gz to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.trace.json.gz","time":"2021-06-01T18:42:03.403Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.503770: I tensorflow/core/profiler/rpc/client/save_profile.cc:137] Creating directory: drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03","time":"2021-06-01T18:42:03.503Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.515750: I tensorflow/core/profiler/rpc/client/save_profile.cc:143] Dumped gzipped tool data for memory_profile.json.gz to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.memory_profile.json.gz","time":"2021-06-01T18:42:03.515Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"2021-06-01 18:42:03.541263: I tensorflow/core/profiler/rpc/client/capture_profile.cc:251] Creating directory: drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03Dumped tool data for xplane.pb to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.xplane.pb","time":"2021-06-01T18:42:03.541Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"Dumped tool data for overview_page.pb to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.overview_page.pb","time":"2021-06-01T18:42:03.541Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"Dumped tool data for input_pipeline.pb to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.input_pipeline.pb","time":"2021-06-01T18:42:03.541Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"Dumped tool data for tensorflow_stats.pb to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.tensorflow_stats.pb","time":"2021-06-01T18:42:03.542Z","v":0}
{"pid":1,"type":"jupyter","level":40,"msg":"Dumped tool data for kernel_stats.pb to drive/MyDrive/BraTS_dataset/BraTS2020/128_patches_left_right_flipped20210601-184045/train/plugins/profile/2021_06_01_18_42_03/3b9bd48a0fbc.kernel_stats.pb","time":"2021-06-01T18:42:03.542Z","v":0}