<aside> ➡️ If we don’t source our dataset with enough rigor, the following biases might appear:
</aside>
<aside> ➡️ We will ensure that our model is not biased by:
</aside>
<aside> ➡️ We will make sure our model does not overfit by…
</aside>
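As one concrete way to check for overfitting, a held-out validation set can be used: if training error keeps dropping while validation error rises, the model is fitting noise. The sketch below is a hypothetical illustration (toy data, polynomial models chosen for simplicity), not a prescription for any particular project:

```python
import numpy as np

# Toy dataset: noisy samples of an underlying function (hypothetical example).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.1, 40)

# Hold out the last 10 points as a validation set.
x_tr, y_tr = x[:30], y[:30]
x_va, y_va = x[30:], y[30:]

def val_error(degree):
    """Fit a polynomial on the training split, measure MSE on the held-out split."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    pred = np.polyval(coeffs, x_va)
    return np.mean((pred - y_va) ** 2)

# A very high-degree polynomial fits the training noise and generalizes
# worse than a moderate-degree one; the validation error reveals this.
errors = {d: val_error(d) for d in (1, 5, 25)}
```

Comparing `errors` across model capacities is the basic signal: pick the capacity (or stopping point) where validation error is lowest rather than where training error is lowest.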
<aside> ➡️ We have to remind ourselves that our application could be misused by … to do …
</aside>
Pick the most relevant one:
<aside> ➡️ In a catastrophic scenario, where our entire training dataset was stolen or recovered from our model, the risk would be…
</aside>
OR
<aside> ➡️ We have decided that our training dataset will be fully open-sourced, but before doing so we made sure that…
</aside>
<aside> ➡️ If someone found a way to “cheat” our model and make it output any prediction they want instead of the correct one, the risk would be that …
</aside>