How To Split Dataset Into K-fold Without Loading The Whole Dataset At Once?
I can't load all of my dataset at once, so I used tf.keras.preprocessing.image_dataset_from_directory() in order to load batches of images during training. It works well if I want
Solution 1:
Personally, I recommend that you switch to tf.data.Dataset. Not only is it more efficient, but it also gives you more flexibility in what you can implement.
Say, as an example, you have your image paths (images) and their labels (labels).
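If you are starting from the same kind of directory tree that image_dataset_from_directory expects (one subfolder per class), a minimal sketch of collecting the paths and integer labels yourself might look like the following. The directory path, layout, and variable names here are assumptions for illustration, not part of the original answer:

import os
import numpy as np

data_dir = "path/to/dataset"  # assumed layout: data_dir/<class_name>/<image files>

image_paths = []
labels = []
class_names = sorted(os.listdir(data_dir))
for label, class_name in enumerate(class_names):
    class_dir = os.path.join(data_dir, class_name)
    for file_name in os.listdir(class_dir):
        image_paths.append(os.path.join(class_dir, file_name))
        labels.append(label)

# NumPy arrays of strings/ints so they can be indexed with the KFold index arrays below
images = np.array(image_paths)
labels = np.array(labels)

Because only the path strings and labels are held in memory, the K-fold split itself never needs to load a single image.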
From there, you could build the K-fold splits like this:
from sklearn.model_selection import KFold

training_data = []
validation_data = []

# Split the arrays of image paths and labels (not the decoded images) into 5 folds
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_index, val_index in kf.split(images, labels):
    X_train, X_val = images[train_index], images[val_index]
    y_train, y_val = labels[train_index], labels[val_index]
    training_data.append([X_train, y_train])
    validation_data.append([X_val, y_val])
Then, for each fold, you could build the input pipelines and train:
for index, _ in enumerate(training_data):
    x_train, y_train = training_data[index][0], training_data[index][1]
    x_valid, y_valid = validation_data[index][0], validation_data[index][1]

    # Training pipeline: decode/augment images in map(), then batch and prefetch
    train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    train_dataset = train_dataset.map(mapping_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    train_dataset = train_dataset.batch(batch_size)
    train_dataset = train_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    # Same pipeline for the validation fold
    validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))
    validation_dataset = validation_dataset.map(mapping_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    validation_dataset = validation_dataset.batch(batch_size)
    validation_dataset = validation_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    model.fit(train_dataset,
              validation_data=validation_dataset,
              epochs=epochs,
              verbose=2)
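The mapping_function used above is not shown in the original answer; it is where each image path is actually read and decoded, so only one batch of images is ever held in memory at a time. A minimal sketch, assuming JPEG images and a fixed target size (both assumptions):

import tensorflow as tf

IMG_SIZE = 224  # assumed target size

def mapping_function(image_path, label):
    # Read and decode a single image from disk, then resize and normalise it
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

Note that model.fit is called once per fold inside the loop; if you want each fold to be evaluated independently, you would typically rebuild and recompile the model at the top of the loop so the folds do not share weights.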