Query an LLM and return a structured JSON or list response

query_llm_json(
  prompt,
  model = "gpt-4o-mini",
  temperature = 0,
  max_tokens = NULL,
  stop = NULL,
  top_p = 1,
  frequency_penalty = 0,
  presence_penalty = 0,
  return_list = FALSE,
  expect_score = FALSE
)

Arguments

prompt

Character. The user prompt to send to the LLM.

model

Character. The model to use (default = "gpt-4o-mini").

temperature

Numeric. Sampling temperature; 0 yields near-deterministic output, higher values yield more varied output (default = 0).

max_tokens

Integer. Maximum number of tokens in the response; NULL uses the model's default limit (default = NULL).

stop

Character. Optional stop sequence, or a vector of sequences, at which generation halts (default = NULL).

top_p

Numeric. Nucleus sampling; only tokens within the top cumulative probability mass top_p are considered (default = 1, i.e. no truncation).

frequency_penalty

Numeric. Positive values penalize tokens in proportion to how often they have already appeared, discouraging verbatim repetition (default = 0).

presence_penalty

Numeric. Positive values penalize tokens that have already appeared at all, encouraging the model to move on to new topics (default = 0).

return_list

Logical. If TRUE, return a parsed R list instead of a JSON string (default = FALSE).

expect_score

Logical. If TRUE, instruct the LLM to return a numeric "score" alongside the text response (default = FALSE).

Value

A JSON string, or an R list when return_list = TRUE, with the fields prompt, model, response, score (present only when expect_score = TRUE), and timestamp.
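
Examples

The calls below are illustrative sketches rather than recorded output: they assume the package has been configured with a valid API key for its backend, and exact field values (such as the timestamp format) depend on that backend.

# Default usage: returns a JSON string
out <- query_llm_json(
  prompt = "Summarise the plot of Hamlet in one sentence."
)

# Return a parsed R list and request a numeric score
res <- query_llm_json(
  prompt = "Rate the sentiment of this review from 0 to 10: 'Loved it!'",
  return_list = TRUE,
  expect_score = TRUE
)
res$response  # the text answer
res$score     # numeric score, present because expect_score = TRUE

# Constrain generation: cap the response length and stop at a blank line
query_llm_json(
  prompt = "List three uses of the apply() family in R.",
  max_tokens = 200,
  stop = "\n\n"
)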