---
title: "Getting started with quallmer"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Getting started with quallmer}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

The `quallmer` package helps qualitative researchers use large language models (LLMs) for tasks such as coding, annotation, and thematic analysis. It is user-friendly and does not require extensive programming knowledge, making it accessible to researchers from a range of backgrounds.

Our tutorials provide a brief introduction to the `quallmer` package. Under the hood, `quallmer` relies on the `ellmer` package for LLM interactions, giving users a single interface to different LLM providers. For more information on `ellmer` and the providers it supports, please refer to its documentation [here](https://ellmer.tidyverse.org/index.html).

## Basic usage

The `quallmer` package is used from within R. Please make sure you have a recent version of [R and RStudio installed](https://posit.co/download/rstudio-desktop/) on your computer. If you are new to R and RStudio, you can find [a great, free-of-charge 1.5-hour introduction to R and RStudio on Instats](https://instats.org/seminar/introduction-to-r-with-rstudio-free-1-h3).

To get started with `quallmer`, you first need to install the package from GitHub. 

```{r, eval = FALSE}
# If you don't have pak installed yet, uncomment and run the following line:
# install.packages("pak")
# Then, install quallmer using pak:
pak::pak("quallmer/quallmer")
```

Then, you can load the package and begin using its functions.

```{r setup}
library(quallmer)
```

## The quallmer workflow

The typical quallmer workflow consists of five steps:

1. **Define** your codebook with `qlm_codebook()`
2. **Code** your data with `qlm_code()`
3. **Replicate** with different settings using `qlm_replicate()`
4. **Compare** results with `qlm_compare()`
5. **Document** everything with `qlm_trail()`
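
In outline, the five steps might look like the sketch below. The function names come from the list above, but the argument names and values are illustrative assumptions, not the package's documented signatures; see the workflow vignette linked below for working examples.

```{r, eval = FALSE}
# 1. Define a codebook (category names and instructions are made up here)
codebook <- qlm_codebook(
  positive = "The text expresses a positive sentiment.",
  negative = "The text expresses a negative sentiment."
)

# 2. Code a vector or data frame of texts against the codebook
coded <- qlm_code(texts, codebook = codebook)

# 3. Replicate the coding, e.g. to check stability across runs or models
replicated <- qlm_replicate(coded, times = 3)

# 4. Compare the runs, e.g. to assess agreement between them
comparison <- qlm_compare(coded, replicated)

# 5. Document the whole analysis for transparency and reproducibility
qlm_trail(comparison)
```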

For a hands-on introduction with code examples, see [**The quallmer workflow**](https://quallmer.github.io/quallmer/articles/pkgdown/getting-started/workflow.html).

## Setting up LLM access

Before using large language models, you need to set up access to an LLM provider:

1. [**Signing up for an OpenAI API key**](https://quallmer.github.io/quallmer/articles/pkgdown/getting-started/openai.html): Obtain an API key from OpenAI to use models like GPT-4o.

2. [**Working with an open-source Ollama model**](https://quallmer.github.io/quallmer/articles/pkgdown/getting-started/ollama.html): Use open-source models locally with Ollama.
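
As a quick illustration: `ellmer`, which `quallmer` builds on, typically reads an OpenAI key from the `OPENAI_API_KEY` environment variable, while a local Ollama model needs no key at all. The model names below are examples, not requirements; see the linked tutorials for full setup instructions.

```{r, eval = FALSE}
# Persist the key across sessions by adding a line to your user .Renviron,
# e.g. via usethis::edit_r_environ():
#   OPENAI_API_KEY=<your key>
# Or set it for the current session only:
Sys.setenv(OPENAI_API_KEY = "<your key>")

# ellmer picks the key up automatically when you create a chat:
chat <- ellmer::chat_openai(model = "gpt-4o")

# A local open-source model via Ollama (the Ollama server must be running):
chat <- ellmer::chat_ollama(model = "llama3.2")
```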

The `quallmer` package supports multiple LLM providers through the `ellmer` package. For more information, see the [ellmer documentation](https://ellmer.tidyverse.org/index.html).
