Bias

<aside> ➡️ If we don’t source our dataset with enough rigor, the following biases might appear:

</aside>

<aside> ➡️ We will ensure that our model is not biased by:

</aside>
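One concrete way to back up that commitment is a representation audit of the dataset before training. Below is a minimal sketch, assuming the data is a list of records with a hypothetical `group` field and an illustrative 30% under-representation threshold; a real audit would use the project's actual sensitive attributes and thresholds.

```python
from collections import Counter

def group_shares(records, key):
    """Return each group's share of the dataset, to spot under-representation."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical sample; the sourcing audit would run this on the real dataset.
sample = [{"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"}]
shares = group_shares(sample, "group")
underrepresented = [g for g, s in shares.items() if s < 0.30]
```

Running the check per sensitive attribute (and per label within each group) makes the bias claim in the callout verifiable rather than aspirational.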

Overfitting

<aside> ➡️ We will make sure our model does not overfit by…

</aside>
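Whatever mitigation is filled in above, the standard detection signal is the gap between training and validation accuracy. A minimal sketch, with a hypothetical 5% gap threshold and made-up predictions standing in for a real held-out split:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def looks_overfit(train_acc, val_acc, max_gap=0.05):
    """Flag overfitting when training accuracy exceeds validation accuracy
    by more than max_gap (threshold is an assumption, tune per project)."""
    return (train_acc - val_acc) > max_gap

# Hypothetical numbers; real values would come from a held-out validation split.
train_acc = accuracy([1, 1, 0, 1], [1, 1, 0, 1])  # perfect on training data
val_acc = accuracy([1, 0, 0, 0], [1, 1, 0, 1])    # much worse on unseen data
flag = looks_overfit(train_acc, val_acc)
```

Tracking this gap across training epochs also tells you when to stop early, which is one of the cheapest mitigations to name in the callout.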

Misuse

<aside> ➡️ We have to remind ourselves that our application could be misused by … to do …

</aside>

Data leakage

Pick the most relevant one

<aside> ➡️ In a catastrophic scenario where our entire training dataset was stolen or recovered from our model, the risk would be…

</aside>

OR

<aside> ➡️ We have decided that our training dataset will be fully open-sourced, but beforehand we made sure that…

</aside>
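If the open-source route is chosen, one common pre-release step is pseudonymizing identifying fields. A minimal sketch, assuming hypothetical `user` and `salt` names; note that salted hashing only makes values opaque, it does not guarantee full anonymization (records can still be linked or re-identified through other fields):

```python
import hashlib

def pseudonymize(record, sensitive_fields, salt):
    """Replace sensitive field values with salted SHA-256 digests before release."""
    cleaned = dict(record)
    for field in sensitive_fields:
        if field in cleaned:
            digest = hashlib.sha256((salt + str(cleaned[field])).encode()).hexdigest()
            cleaned[field] = digest[:12]  # truncated digest as an opaque identifier
    return cleaned

# Hypothetical record; a real pipeline would apply this to every row.
record = {"user": "alice@example.com", "label": 1}
released = pseudonymize(record, ["user"], salt="keep-this-secret")
```

The salt must stay private; without it, common values could be recovered by brute-forcing the hash.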

Hacking

<aside> ➡️ If someone found a way to “cheat” our model and make it output any prediction they want instead of the real one, the risk would be that …

</aside>
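A first line of defense against such manipulation is validating inputs before they reach the model, so obviously out-of-distribution values are rejected. A minimal sketch, with hypothetical per-feature bounds assumed to be derived from the training data:

```python
def validate_input(features, bounds):
    """Reject feature vectors outside the ranges seen during training,
    before they ever reach the model."""
    if len(features) != len(bounds):
        raise ValueError("unexpected feature count")
    for value, (low, high) in zip(features, bounds):
        if not low <= value <= high:
            raise ValueError(f"feature value {value} outside [{low}, {high}]")
    return features

# Hypothetical bounds, e.g. a 0-1 score and an age in years.
bounds = [(0.0, 1.0), (0, 120)]
validated = validate_input([0.4, 37], bounds)
```

This does not stop subtle adversarial perturbations inside the valid range, but it blocks the crudest attacks and gives the team a natural place to log suspicious requests.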