# Parameters

## Overview

When you send a request to the text model, you can specify custom parameters that affect the model's response. These parameters let you control token usage, optimize your requests, and steer the model toward more creative or more precise results.

### Possible Parameters

### Maximum Tokens

This parameter sets the maximum number of tokens (words or word fragments) that the model will generate in response to the prompt. Generation stops once the limit is reached, so long responses may be cut off. A lower limit typically results in faster response times.

```sh
maxOutputTokens: 4096 # Limit the model to generating up to 4096 tokens.
```
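For example, the parameter can be included in a request body like this. The field name follows the snippet above; the other fields are placeholders, so check the API reference for the exact request schema:

```python
import json

# Hypothetical request body: only maxOutputTokens comes from this page;
# "model" and "prompt" are illustrative placeholders.
payload = {
    "model": "your-model-id",
    "prompt": "Write a haiku about the sea.",
    "maxOutputTokens": 4096,
}
body = json.dumps(payload)
print(body)
```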

### Temperature

The temperature controls the randomness of the model's output. Setting it to 0 results in deterministic output, whereas higher values up to 1 introduce more variation and creativity in responses.

```sh
temperature: 0.7 # Sets a balance between randomness and determinism.
```
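Conceptually, temperature divides the model's logits before the softmax step: low values sharpen the distribution toward the most likely token, high values flatten it. A minimal sketch of that math (not the provider's actual implementation, which also handles the temperature-0 case by picking the top token directly):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    # Lower temperature concentrates probability on the top token;
    # higher temperature spreads it out. Requires temperature > 0.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # more varied
```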

### Top P (Nucleus Sampling)

The top P parameter, also known as nucleus sampling, filters the model's token choices such that the cumulative probability of the tokens considered at each step is at least P. This method allows for more dynamic and contextually relevant responses.

```bash
topP: 0.9 # Only tokens that contribute to the top 90% cumulative probability are considered.
```
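A minimal sketch of the filtering step (not the provider's implementation): tokens are sorted by probability, and the smallest set whose cumulative probability reaches P is kept.

```python
def nucleus_filter(probs, top_p):
    # probs: mapping of token -> probability.
    # Keep the highest-probability tokens until their cumulative
    # probability reaches top_p; discard the rest.
    kept = {}
    cumulative = 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(nucleus_filter(probs, 0.9))  # keeps "the", "a", "cat" (0.95 >= 0.9)
```

In a real sampler the surviving probabilities are then renormalized and one token is drawn from them.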

### Top K (Top-K Sampling)

Top K limits the model's choices to the K most likely next tokens. Lower values can speed up generation and may improve coherency by focusing on the most probable tokens.

```bash
top_k = 40  # The model will only consider the top 40 most probable next tokens.
```
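The filtering itself is simple; as a sketch (again, not the provider's internal code):

```python
def top_k_filter(probs, k):
    # probs: mapping of token -> probability.
    # Keep only the k most probable next tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(top_k_filter(probs, 2))  # {'the': 0.5, 'a': 0.3}
```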

### Repetition Penalty

This parameter discourages the model from repeating the same line or phrase, promoting more diverse and engaging content.

```bash
repetition_penalty = 1.2  # Applies a penalty to discourage repetition.
```
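The provider does not document its exact formula here, but a common (CTRL-style) approach is to penalize the logits of tokens that have already appeared in the output. A sketch under that assumption:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    # For tokens already generated, divide positive logits by the penalty
    # and multiply negative logits by it, making repeats less likely.
    adjusted = list(logits)
    for token_id in set(generated_ids):
        if adjusted[token_id] > 0:
            adjusted[token_id] /= penalty
        else:
            adjusted[token_id] *= penalty
    return adjusted

# Tokens 0 and 2 were already generated, so their logits are penalized.
adjusted = apply_repetition_penalty([3.0, 1.0, -0.5], [0, 2], 1.2)
print(adjusted)
```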

### Content Moderation Parameters

You can enable content moderation for both input and output by adding these parameters to your requests:

```json
{
  "input_moderation": true,
  "output_moderation": true
}
```
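For example, the flags can be merged into an existing request body. The moderation field names come from the snippet above; the rest of the payload is an illustrative placeholder:

```python
import json

# Hypothetical request body with the moderation flags added.
payload = {
    "prompt": "Tell me a story.",
    "temperature": 0.7,
}
payload.update({"input_moderation": True, "output_moderation": True})
body = json.dumps(payload)
print(body)
```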

For more details on content moderation, see the [Content Moderation page](/kidjig-docs/api-provider/text-models-llm/content-moderation.md).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://kidjig.gitbook.io/kidjig-docs/api-provider/text-models-llm/parameters.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
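For example, the URL for such a query can be built with the question URL-encoded (the question text here is only an illustration):

```python
from urllib.parse import urlencode

base = "https://kidjig.gitbook.io/kidjig-docs/api-provider/text-models-llm/parameters.md"
question = "What is the default temperature?"  # hypothetical question
url = f"{base}?{urlencode({'ask': question})}"
print(url)
```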

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
