Evaluating your models' predictions
In this guide, we describe the technicalities behind evaluations. The main goal of evaluating a model is to measure its predictive performance.
In the evaluation list view below, you can search evaluations by name by clicking the SEARCH menu option in the top right corner.
The Machine Learning Module provides two different ways to measure your model's performance: single evaluations and cross-validation evaluations.
To create a single evaluation, you need two resources: a training set and a test set. You can easily obtain these by using the 1-click menu option, which automatically splits your dataset into a random 80% subset for training and a 20% subset for testing. You then have two ways to create your single evaluation.
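The 80/20 split performed by the 1-click option can be approximated locally. Below is a minimal sketch in plain Python (the function name and row representation are hypothetical, not part of the module):

```python
import random

def train_test_split(rows, train_fraction=0.8, seed=42):
    """Randomly split rows into training and testing subsets.

    Shuffles a copy of the rows (so the original list is untouched)
    and cuts it at the requested fraction.
    """
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))
train, test = train_test_split(rows)
print(len(train), len(test))  # 80 20
```

A fixed seed keeps the split reproducible across runs; in practice the 1-click option draws a fresh random split each time.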
Click EVALUATE A MODEL in the 1-click action menu from the evaluation list view.
This option redirects you to the New Evaluation view where you need to select a model and a testing dataset.
Alternatively, click EVALUATE in the pop-up menu from the model list view.
With this option, you will be redirected to the New Evaluation view where the model will be pre-filled in the selector, so you only have to choose the testing dataset.
If you previously split your original dataset into two subsets (one for training and another for testing) using the 1-click menu option from your dataset view, the Machine Learning Module will automatically select the corresponding testing dataset.
Finally, click the green EVALUATE button to perform the evaluation.
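Under the hood, a single evaluation compares the model's predictions against the true labels in the testing dataset. A minimal accuracy sketch (plain Python, hypothetical names, not the module's actual implementation):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    if not labels:
        raise ValueError("labels must be non-empty")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions match the labels
print(accuracy(["a", "b", "a", "a"], ["a", "b", "b", "a"]))  # 0.75
```

Real evaluations report more than accuracy (e.g. precision, recall, and a confusion matrix for classification), but each metric is derived from the same prediction/label comparison.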
To create a cross-validation evaluation you just need a dataset as input.
Cross-validation evaluations are implemented in the module as a WhizzML script using a k-fold cross-validation algorithm, and they can be found in the BigML Gallery.
Creating a cross-validation evaluation with WhizzML scripts takes seven steps; please check this documentation to guide you.
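The k-fold idea behind the script can be sketched in plain Python (an illustration of the partitioning logic only, not the WhizzML script itself): the dataset is divided into k folds, each fold serves once as the test set while the remaining k-1 folds train the model, and the k evaluation results are then averaged.

```python
def k_fold_indices(n, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Splits n examples into k contiguous folds of near-equal size;
    each fold is the test set exactly once.
    """
    # Earlier folds absorb the remainder when n is not divisible by k.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n))
        yield train_idx, test_idx
        start += size

# 10 examples, 5 folds: every round trains on 8 and tests on 2.
for train_idx, test_idx in k_fold_indices(10, 5):
    print(len(train_idx), len(test_idx))  # 8 2
```

In practice you would shuffle the example indices before folding so each fold is a random sample of the dataset.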