Evaluating your model's predictions

In this guide, we describe the technical details behind evaluations. The main goal of evaluating your model is to measure its predictive performance.


Evaluation Dashboard

In the evaluation list view, you can search evaluations by name using the SEARCH menu option in the top right corner.
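
If you prefer to work programmatically, a rough equivalent is to list evaluations through the BigML Python bindings. This is only a sketch: the example name is a placeholder and the query-string filter is assumed to follow the API's generic list filtering syntax.

    from bigml.api import BigML

    # Credentials are read from the BIGML_USERNAME and BIGML_API_KEY
    # environment variables.
    api = BigML()

    # List evaluations filtered by name (filter syntax assumed to follow the
    # API's generic list-by-field query string).
    evaluations = api.list_evaluations("name=loan risk model evaluation")
    for evaluation in evaluations["objects"]:
        print(evaluation["name"], evaluation["resource"])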


Creating evaluations

The Data Intelligence Module provides two ways to measure your model's performance: single evaluations and cross-validation evaluations.

    Single evaluations

    To create a single evaluation, you need two resources: a training set and a test set. You can obtain them easily with the 1-click menu option that automatically splits your dataset into a random 80% subset for training and a 20% subset for testing, as in the sketch below. You then have two ways to create your single evaluations.
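
    If you would rather script this split instead of using the 1-click option, a minimal sketch with the BigML Python bindings is shown below. The dataset ID and the seed are placeholders; the out_of_bag flag selects the complementary 20% of rows for testing.

        from bigml.api import BigML

        api = BigML()  # credentials from BIGML_USERNAME / BIGML_API_KEY

        dataset_id = "dataset/000000000000000000000001"  # placeholder ID

        # Random 80% sample for training; fixing the seed makes the split
        # reproducible.
        train_dataset = api.create_dataset(
            dataset_id, {"sample_rate": 0.8, "seed": "my split seed"})

        # The complementary 20% of rows (out of the bag) for testing.
        test_dataset = api.create_dataset(
            dataset_id,
            {"sample_rate": 0.8, "seed": "my split seed", "out_of_bag": True})

        api.ok(train_dataset)  # wait until both datasets are ready
        api.ok(test_dataset)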

    1-Click action

    Click EVALUATE A MODEL in the 1-click action menu from the evaluation list view.


    This option redirects you to the New Evaluation view where you need to select a model and a testing dataset.


    Alternatively, click EVALUATE in the pop-up menu from the model list view.


    With either of these options, you will be redirected to the New Evaluation view, where the model will be pre-filled in the selector and you only have to choose the testing dataset.


    Finally, click the green EVALUATE button to perform the evaluation.
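
    The green button corresponds to a single API call. As a rough sketch, continuing from the split above (train_dataset and test_dataset are the resources created there), the same step could be done from the Python bindings; the metric keys shown apply to classification models.

        # Build a model on the training split and evaluate it on the test split.
        model = api.create_model(train_dataset)
        api.ok(model)

        evaluation = api.create_evaluation(model, test_dataset)
        api.ok(evaluation)  # wait until the evaluation has finished

        # The finished evaluation stores the performance metrics; accuracy and
        # average_f_measure are among the keys reported for classification.
        result = evaluation["object"]["result"]["model"]
        print("accuracy:", result.get("accuracy"))
        print("average f-measure:", result.get("average_f_measure"))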


    Cross-validation

    To create a cross-validation evaluation, you only need a dataset as input.

    Cross-validation evaluations are implemented in the module as a WhizzML script using this k-fold cross-validation algorithm, and the script can be found in the BigML Gallery.

    It takes seven steps to create your cross-validation evaluation using WhizzML scripts. Please check this documentation to guide you.
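
    As a shortcut, the same script can also be launched through the API once you have cloned it from the Gallery. The sketch below is only an illustration: the script and dataset IDs are placeholders, and the input names ("dataset-id", "k-folds") are assumptions that depend on the inputs declared by the script you cloned.

        from bigml.api import BigML

        api = BigML()

        # Placeholder IDs: the script is assumed to be your own copy of the
        # k-fold cross-validation WhizzML script from the BigML Gallery.
        script_id = "script/000000000000000000000002"
        dataset_id = "dataset/000000000000000000000001"

        # The input names must match the inputs declared by the script; check
        # the script's definition if the execution rejects them.
        execution = api.create_execution(
            script_id,
            {"inputs": [["dataset-id", dataset_id], ["k-folds", 5]]})
        api.ok(execution)  # wait for all folds to be built and evaluated

        # The script's outputs usually include the resulting cross-validation
        # evaluation resource.
        print(execution["object"]["execution"].get("outputs"))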