Causal (or GPT-like) modeling
- causal_config(): Returns the configuration of a causal model
- causal_lp(): Get the log probability of each element of a vector of words (or phrases) using a causal transformer
- causal_lp_mats(): Get a list of matrices with the log probabilities of each possible word given its previous context using a causal transformer
- causal_next_tokens_tbl(): Get the possible next tokens and their log probabilities given the previous context using a causal transformer
- causal_preload(): Preloads a causal language model
- causal_tokens_lp_tbl(): Get the log probability of each token in a sentence (or group of sentences) using a causal transformer
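A minimal usage sketch of the causal functions listed above. It assumes these functions are exported by the pangoling R package and that "gpt2" is an available model checkpoint; argument names are illustrative and should be checked against each function's reference entry.

library(pangoling)              # assumed package exporting the causal_*() functions

causal_preload(model = "gpt2")  # download/cache the causal model once up front

# Log probability of each word given the words that precede it
causal_lp(x = c("The", "apple", "fell", "far", "from", "the", "tree", "."),
          model = "gpt2")

# Ranked table of candidate next tokens and their log probabilities for a left context
causal_next_tokens_tbl(context = "The apple fell far from the",
                       model = "gpt2")

# Log probability of every token in a full sentence
causal_tokens_lp_tbl(texts = "The apple fell far from the tree.",
                     model = "gpt2")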
Masked (or BERT-like) modeling
- masked_config(): Returns the configuration of a masked model
- masked_lp(): Get the log probability of a target word (or phrase) given a left and right context
- masked_preload(): Preloads a masked language model
- masked_tokens_tbl(): Get the possible tokens and their log probabilities for each mask in a sentence
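A comparable sketch for the masked functions above, under the same assumptions (pangoling package, "bert-base-uncased" as an available checkpoint); the argument names shown here, in particular those of masked_lp(), are assumptions and should be verified against the function documentation.

library(pangoling)                           # assumed package exporting the masked_*() functions

masked_preload(model = "bert-base-uncased")  # cache a BERT-style masked model

# Candidate fillers and their log probabilities for each [MASK] in the sentence
masked_tokens_tbl(masked_sentences = "The apple fell far from the [MASK].",
                  model = "bert-base-uncased")

# Log probability of a specific target word given its left and right context
masked_lp(l_contexts = "The apple fell far from the",
          targets    = "tree",
          r_contexts = ".",
          model      = "bert-base-uncased")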
Vocabulary and tokenization