The menu also contains the safety filters, which let you control how your chatbot responds in certain situations. The button that takes you to the Safety filters is in the Situations section.
We distinguish between:
1. Hate content
2. Self-harm content
3. Sexual content
4. Violence
It is important to configure the safety filters so that the chatbot's responses match the tone of voice and the policies of your company. Ask yourself: how would your own employees react to such expressions?
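To picture how these categories map to responses, here is a minimal sketch in Python, assuming a simple category-to-reply lookup; the category keys and the `safety_reply` helper are hypothetical, since the product configures all of this through the menu rather than in code:

```python
# Illustrative sketch only: in the product these are filter settings, not
# code. The keys below are hypothetical names mirroring the four categories.
SAFETY_RESPONSES: dict[str, str] = {
    "hate": "I'd rather not continue this conversation.",
    "self_harm": "Help is always nearby, call 113 for immediate help.",
    "sexual": "I won't go into this; I like to keep things professional.",
    "violence": "In threatening situations, call 911!",
}

def safety_reply(category: str) -> str | None:
    """Return the configured reply for a flagged category, or None if the
    message did not trigger any filter."""
    return SAFETY_RESPONSES.get(category)
```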

Here are some examples:
Hate content
When someone expresses hate, respond with: 'I don't like you talking to me like that! You wouldn't appreciate me talking to you that way, would you?' If the person then continues to express hate, respond with: 'I'd rather not continue this conversation. This is not how we engage with each other.'
Self-harm content
If someone talks about hurting themselves or about self-harm, respond with: 'I'm sorry to read this. I think it would be good for you to talk to someone you trust about this. If you feel unsafe, find a place where you feel safer. Help is always nearby; call 113 for immediate help from a professional.'
Sexual content
If someone makes sexual comments, respond with: 'I won't go into this; I like to keep things professional and businesslike.'
Violence
If someone makes violent comments, respond with: 'I don't like violence! Are you in danger yourself? Then it's a good idea to contact someone you trust. If you feel unsafe, find a place where you feel safer. In threatening situations, call 911!'
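Note that the hate-content example escalates in two steps: a warning on the first expression, and a closing response on any repeat. A minimal sketch of that escalation logic, assuming a hypothetical per-conversation warning counter (the product tracks this state internally through the filter settings):

```python
HATE_WARNING = (
    "I don't like you talking to me like that! You wouldn't appreciate "
    "me talking to you that way, would you?"
)
HATE_FINAL = (
    "I'd rather not continue this conversation. This is not how we "
    "engage with each other."
)

def hate_response(prior_warnings: int) -> str:
    """First hate expression gets a warning; any repeat closes the topic."""
    return HATE_WARNING if prior_warnings == 0 else HATE_FINAL

# Example: two hateful messages in the same conversation.
print(hate_response(prior_warnings=0))  # warning
print(hate_response(prior_warnings=1))  # closing response
```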