# Hinge Loss

From our SVM model, we know that the hinge loss is $$\ell(y, f(x)) = \max(0,\, 1 - y \cdot f(x))$$, where $$y \in \{-1, +1\}$$ is the true label and $$f(x)$$ is the model's raw score.

Looking at the graph for SVM in Fig 4, we can see that for $$y \cdot f(x) \geq 1$$, the hinge loss is **0**. However, when $$y \cdot f(x) < 1$$, the hinge loss becomes positive. As $$y \cdot f(x)$$ decreases for misclassified points (the "very wrong" points in Fig 5), the loss $$1 - y \cdot f(x)$$ grows linearly and without bound.

Hence, points that lie farther on the wrong side of the decision margin incur a greater loss and are penalised more heavily.
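The behaviour described above can be sketched in a few lines of Python. This is a minimal illustration of the hinge loss formula, not a full SVM implementation; the function name `hinge_loss` is my own choice here.

```python
def hinge_loss(y, f_x):
    """Hinge loss for a single example: max(0, 1 - y * f(x)).

    y    -- true label, either +1 or -1
    f_x  -- raw (signed) score produced by the model
    """
    return max(0.0, 1.0 - y * f_x)

# Correctly classified and outside the margin (y * f(x) >= 1): zero loss.
print(hinge_loss(1, 2.5))   # 0.0

# Inside the margin (0 < y * f(x) < 1): small positive loss.
print(hinge_loss(1, 0.5))   # 0.5

# Misclassified (y * f(x) < 0): loss grows linearly with the violation.
print(hinge_loss(-1, 2.0))  # 3.0
```

Note how the loss for the misclassified point is largest: the farther the score is from the correct side of the margin, the larger the (linear) penalty.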

*Conclusion*: This is just a basic overview of what loss functions are and how hinge loss works. I will be posting further articles that explore hinge loss in greater depth shortly.
