Query an LLM and return a structured JSON or list response
query_llm_json(
  prompt,
  model = "gpt-4o-mini",
  temperature = 0,
  max_tokens = NULL,
  stop = NULL,
  top_p = 1,
  frequency_penalty = 0,
  presence_penalty = 0,
  return_list = FALSE,
  expect_score = FALSE
)
prompt: Character. The user prompt to send to the LLM.
model: Character. The model to use (default = "gpt-4o-mini").
temperature: Numeric. Sampling temperature; higher values produce more varied, creative responses (default = 0).
max_tokens: Integer. Maximum number of tokens in the response (default = NULL, i.e. the model's default).
stop: Optional. A string or character vector of stop sequences.
top_p: Numeric. Nucleus sampling probability mass (default = 1).
frequency_penalty: Numeric. Penalize tokens in proportion to how often they have already appeared, discouraging verbatim repetition (default = 0).
presence_penalty: Numeric. Penalize tokens that have already appeared at all, encouraging the model to move on to new topics (default = 0).
return_list: Logical. If TRUE, return an R list instead of a JSON string (default = FALSE).
expect_score: Logical. If TRUE, instruct the LLM to return a numeric "score" alongside the text response (default = FALSE).
A JSON string (or an R list when return_list = TRUE) with the fields: prompt, model, response, score (present only when expect_score = TRUE), and timestamp.
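A minimal usage sketch, assuming the function is available with a configured API key; the prompts are illustrative, and the parse step assumes the jsonlite package is installed:

# Basic call: returns a JSON string by default
res_json <- query_llm_json(
  prompt = "Summarize the plot of Hamlet in one sentence.",
  model = "gpt-4o-mini",
  temperature = 0
)

# Parse the JSON string into an R list (assumes jsonlite)
res <- jsonlite::fromJSON(res_json)
res$response   # the model's text answer
res$timestamp  # when the query was made

# Or request an R list directly, with a numeric score attached
scored <- query_llm_json(
  prompt = "Rate the sentiment of 'I love this product!' from 0 to 1.",
  return_list = TRUE,
  expect_score = TRUE
)
scored$score   # numeric score field requested via expect_score

With return_list = TRUE no JSON parsing is needed, so jsonlite is only required when working with the default JSON-string return.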