Hello all,

I am interested in conducting k-fold cross-validation for an algorithm
that uses TDB as its database. The stored graph is weighted according to
some criteria. The point is that for each of the k iterations I have to
create the TDB repository, load the training models, weight the graph,
calculate the precision of the algorithm on the remaining test models,
and then delete the complete graph again before the next iteration.

My question is whether I really have to delete all the files and create
a new dataset for every iteration, or whether there is a more
appropriate way to perform k-fold cross-validation with TDB?

Thanks.
