Hello @Layer team. The examples shown use train_test_split. Has anyone succeeded in using KFold CV with Layer?
Yeah, I wrapped the entire cross-validation loop in a function and added the decorator as usual. Right now I'm only returning the model fitted on the last fold, but that's not a big problem for me at the moment since I'm mainly concerned with logging the experiments and saving the predictions.
Can you please send a sample of the loop you're using?
```python
import layer
from layer.decorators import model

@model(name="model-name")
def cross_validate_predict(params, X, y, cv, scoring, seed=0):
    """
    params: model parameters
    X: training features
    y: training labels
    cv: validation strategy
    scoring: scoring function for evaluating each fold's predictions
    returns: the model fitted on the last fold
    """
    # Define the base model and pass params
    model = ...
    layer.log(params)  # log parameters

    # Cross-validation loop
    for fold, (train_index, val_index) in enumerate(cv.split(X, y)):
        # Fit the model, evaluate and generate predictions, etc.
        # You can also log the per-fold results here.
        ...

    # Save predictions to a submission file; you can also log the dataframe
    return model
```
```python
# To run locally
model = cross_validate_predict(best_params, X_train, y_train, cv, scoring)
# or use layer.run([ ])
```
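For reference, here's a minimal self-contained sketch of the cross-validation loop itself, using plain scikit-learn with the Layer decorator and logging calls left out so it runs anywhere. The synthetic dataset, `LogisticRegression` base model, and the extra return values (out-of-fold predictions and per-fold scores) are my own choices for illustration; the per-fold score is what you'd pass to `layer.log` inside the loop:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

def cross_validate_predict(params, X, y, cv, scoring, seed=0):
    oof_preds = np.zeros(len(y))  # out-of-fold predictions, one per sample
    fold_scores = []
    model = None
    for fold, (train_idx, val_idx) in enumerate(cv.split(X, y)):
        # Define the base model and pass params (swap in your own estimator)
        model = LogisticRegression(**params)
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[val_idx])
        oof_preds[val_idx] = preds
        fold_scores.append(scoring(y[val_idx], preds))
        # with Layer you would log here, e.g. layer.log({f"fold_{fold}_score": fold_scores[-1]})
    # as in the snippet above, this returns the model fitted on the last fold
    return model, oof_preds, fold_scores

X, y = make_classification(n_samples=200, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
model, oof, scores = cross_validate_predict({"max_iter": 1000}, X, y, cv, accuracy_score)
```

Returning the out-of-fold predictions alongside the last-fold model also gives you a single array to write to the submission file, instead of rebuilding it from per-fold pieces.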