
Get the log probability of each token in a sentence (or group of sentences) using a causal transformer model.

Usage

causal_tokens_lp_tbl(
  texts,
  model = getOption("pangoling.causal.default"),
  checkpoint = NULL,
  add_special_tokens = NULL,
  config_model = NULL,
  config_tokenizer = NULL,
  batch_size = 1,
  .id = NULL
)

Arguments

texts

Vector or list of texts.

model

Name of a pre-trained model or folder.

checkpoint

Folder of a checkpoint.

add_special_tokens

Whether to include special tokens. It has the same default as the AutoTokenizer method in Python.

config_model

List with other arguments that control how the model from Hugging Face is accessed.

config_tokenizer

List with other arguments that control how the tokenizer from Hugging Face is accessed.

batch_size

Maximum batch size. Larger batches speed up processing but use more memory.

.id

Name of the column with the sentence id.

Value

A table with the token names (token), their log-probabilities (lp), and optionally a sentence id column (named by .id).

Details

A causal language model (also called a GPT-like, auto-regressive, or decoder-only model) is a type of large language model usually used for text generation; it predicts the next word (or, more accurately, the next token) based on the preceding context.
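
For intuition, the related function causal_next_tokens_tbl() (listed under See also) makes this next-token prediction explicit. A minimal sketch, assuming its first argument takes the context string:

causal_next_tokens_tbl(
  context = "The apple doesn't fall far from the",
  model = "gpt2"
)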

If not specified, the causal model used will be the one set in the global option pangoling.causal.default; this can be accessed via getOption("pangoling.causal.default") (by default "gpt2"). To change the default option, use options(pangoling.causal.default = "newcausalmodel").
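
For example, using base R options:

# Inspect the current default causal model ("gpt2" unless changed)
getOption("pangoling.causal.default")
# Point the default at another model; "distilgpt2" is just an illustrative choice
options(pangoling.causal.default = "distilgpt2")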

A list of possible causal models can be found on the Hugging Face website.

Using the config_model and config_tokenizer arguments, it's possible to control how the model and tokenizer from Hugging Face are accessed; see the Python method from_pretrained for details.
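
As a sketch, named list entries are forwarded to from_pretrained(); the revision argument below is a standard from_pretrained parameter, chosen here only for illustration:

causal_tokens_lp_tbl(
  texts = "The apple doesn't fall far from the tree.",
  model = "gpt2",
  config_model = list(revision = "main"),     # forwarded to the model's from_pretrained()
  config_tokenizer = list(revision = "main")  # forwarded to the tokenizer's from_pretrained()
)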

In case of errors when running a new model, check the status of https://status.huggingface.co/.

More examples

See the online article on the pangoling website for more examples.

See also

Other causal model functions: causal_config(), causal_lp_mats(), causal_lp(), causal_next_tokens_tbl(), causal_preload()

Examples

if (FALSE) { # interactive()
causal_tokens_lp_tbl(
  texts = c("The apple doesn't fall far from the tree."),
  model = "gpt2"
)
}
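
A further sketch (not part of the original help page), combining multiple sentences, batching, and a sentence id column:

if (FALSE) { # interactive()
causal_tokens_lp_tbl(
  texts = c(
    "The apple doesn't fall far from the tree.",
    "Don't judge a book by its cover."
  ),
  model = "gpt2",
  batch_size = 2,     # process both sentences in one batch
  .id = "sentence_id" # label each token row with its sentence
)
}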