# Safety settings

## Safety filters

The adjustable safety filters cover the following categories:
Category | Description |
---|---|
Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
Hate speech | Content that is rude, disrespectful, or profane. |
Sexually explicit | Contains references to sexual acts or other lewd content. |
Dangerous | Promotes, facilitates, or encourages harmful acts. |
Civic integrity | Election-related queries. |
These categories are defined in `HarmCategory`. The Gemini models only support `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, `HARM_CATEGORY_DANGEROUS_CONTENT`, and `HARM_CATEGORY_CIVIC_INTEGRITY`. All other categories are used only by PaLM 2 (Legacy) models.

## Content safety filtering level

The Gemini API categorizes the probability of content being unsafe as `HIGH`, `MEDIUM`, `LOW`, or `NEGLIGIBLE`.

The Gemini API blocks content based on the probability of content being unsafe, not its severity. This is important to consider, because some content can have a low probability of being unsafe even though its severity of harm could still be high. For example, compare these sentences:

1. The robot punched me.
2. The robot slashed me up.

The first sentence might result in a higher probability of being unsafe, but you might consider the second to be of higher severity in terms of violence. Given this, carefully test what level of blocking is needed to support your key use cases while minimizing harm to end users.
## Safety filtering per request

You can adjust the safety filter settings for each request you make to the API. When you make a request, the content is analyzed and assigned a safety rating, which includes the harm category and the probability of harm. For example, if content is blocked because the harassment category has a high probability of being unsafe, the safety rating returned has category equal to `HARASSMENT` and harm probability set to `HIGH`.

Threshold (Google AI Studio) | Threshold (API) | Description |
---|---|---|
Block none | BLOCK_NONE | Always show regardless of probability of unsafe content |
Block few | BLOCK_ONLY_HIGH | Block when high probability of unsafe content |
Block some | BLOCK_MEDIUM_AND_ABOVE | Block when medium or high probability of unsafe content |
Block most | BLOCK_LOW_AND_ABOVE | Block when low, medium or high probability of unsafe content |
N/A | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold is unspecified, block using default threshold |
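As a sketch of how these threshold enums are used, the following writes a `safetySettings` fragment that applies Block few (`BLOCK_ONLY_HIGH`) to one category and Block most (`BLOCK_LOW_AND_ABOVE`) to another; the category/threshold pairings here are chosen purely for illustration:

```shell
# Illustrative safetySettings fragment pairing API threshold enums with
# harm categories (values chosen for demonstration only).
cat > safety_fragment.json <<'EOF'
{
  "safetySettings": [
    {"category": "HARM_CATEGORY_CIVIC_INTEGRITY", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_LOW_AND_ABOVE"}
  ]
}
EOF

# Sanity-check that both thresholds were written.
grep -c '"threshold"' safety_fragment.json   # prints 2
```

A fragment like this is merged into the body of a request, as shown in the complete example under Adjust safety settings.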
The default block threshold is Block none (for `gemini-1.5-pro-002` and `gemini-1.5-flash-002` and all newer stable GA models) or Block some (in all other models) for all categories except the Civic integrity category.

The default block threshold for the Civic integrity category is Block none (for `gemini-2.0-flash-001`, aliased as `gemini-2.0-flash`, `gemini-2.0-pro-exp-02-05`, and `gemini-2.0-flash-lite`) both for Google AI Studio and the Gemini API, and Block most for all other models in Google AI Studio only.

You can set these thresholds on each request you make to the generative service. See the `HarmBlockThreshold` API reference for details.

## Safety feedback
`generateContent` returns a `GenerateContentResponse` which includes safety feedback.

Prompt feedback is included in `promptFeedback`. If `promptFeedback.blockReason` is set, then the content of the prompt was blocked.

Response candidate feedback is included in `Candidate.finishReason` and `Candidate.safetyRatings`. If response content was blocked and the `finishReason` was `SAFETY`, you can inspect `safetyRatings` for more details. The content that was blocked is not returned.

## Adjust safety settings
### Gemini API SDKs
The following code snippet shows how to set safety settings in your `GenerateContent` call. This sets the thresholds for the harassment (`HARM_CATEGORY_HARASSMENT`) and hate speech (`HARM_CATEGORY_HATE_SPEECH`) categories. For example, setting these categories to `BLOCK_LOW_AND_ABOVE` blocks any content that has a low or higher probability of being harassment or hate speech. To understand the threshold settings, see Safety filtering per request.

```shell
# safety_settings.sh
echo '{
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
    "contents": [{
        "parts": [{
            "text": "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them."}]}]}' > request.json

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
    -H 'Content-Type: application/json' \
    -X POST \
    -d @request.json 2> /dev/null
```
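To see the safety feedback described above in practice, you can save the curl output and inspect it for `promptFeedback.blockReason` and each candidate's `finishReason`. A minimal sketch against a canned response (a real `GenerateContentResponse` from the call above contains more fields):

```shell
# Canned stand-in for a GenerateContentResponse where generation
# stopped for safety (illustrative values only).
cat > response.json <<'EOF'
{
  "candidates": [
    {
      "finishReason": "SAFETY",
      "safetyRatings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "probability": "HIGH"}
      ]
    }
  ]
}
EOF

# A blocked prompt sets promptFeedback.blockReason; a blocked response
# surfaces as finishReason SAFETY on the candidate.
if grep -q '"blockReason"' response.json; then
  echo "prompt blocked"
elif grep -q '"finishReason": "SAFETY"' response.json; then
  echo "response blocked by categories:"
  grep -o '"category": "[A-Z_]*"' response.json
fi
```

For anything beyond a quick check like this, parse the JSON properly (for example with `jq`) rather than with `grep`.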