RobertaBackbone model
- Original Link : https://keras.io/api/keras_nlp/models/roberta/roberta_backbone/
- Last Checked at : 2024-11-26
RobertaBackbone class

```python
keras_nlp.models.RobertaBackbone(
    vocabulary_size,
    num_layers,
    num_heads,
    hidden_dim,
    intermediate_dim,
    dropout=0.1,
    max_sequence_length=512,
    dtype=None,
    **kwargs
)
```
A RoBERTa encoder network.
This network implements a bi-directional Transformer-based encoder as described in “RoBERTa: A Robustly Optimized BERT Pretraining Approach”. It includes the embedding lookups and transformer layers, but does not include the masked language model head used during pretraining.
The default constructor gives a fully customizable, randomly initialized
RoBERTa encoder with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the from_preset()
constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind. The underlying model is provided by a third party and subject to a separate license.
Arguments
- vocabulary_size: int. The size of the token vocabulary.
- num_layers: int. The number of transformer layers.
- num_heads: int. The number of attention heads for each transformer. The hidden size must be divisible by the number of attention heads.
- hidden_dim: int. The size of the transformer encoding layer.
- intermediate_dim: int. The output dimension of the first Dense layer in a two-layer feedforward network for each transformer.
- dropout: float. Dropout probability for the Transformer encoder.
- max_sequence_length: int. The maximum sequence length this encoder can consume. The sequence length of the input must be less than max_sequence_length. This determines the variable shape for positional embeddings.
- dtype: string or keras.mixed_precision.DTypePolicy. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.
Examples
```python
import numpy as np
import keras_hub

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "padding_mask": np.array(
        [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]], dtype="int32"
    ),
}

# Pretrained RoBERTa encoder.
model = keras_hub.models.RobertaBackbone.from_preset("roberta_base_en")
model(input_data)

# Randomly initialized RoBERTa model with custom config.
model = keras_hub.models.RobertaBackbone(
    vocabulary_size=50265,
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=128,
)
model(input_data)
```
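As a minimal sketch of what the call produces, assuming the backbone returns the final sequence of hidden states as a single tensor with the model's hidden dimension on the last axis:

```python
# A minimal sketch, assuming the backbone call returns the final hidden
# states as a tensor of shape (batch_size, sequence_length, hidden_dim).
outputs = model(input_data)
print(outputs.shape)  # Expected: (1, 12, 256) for the custom config above.
```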
from_preset method

```python
RobertaBackbone.from_preset(preset, load_weights=True, **kwargs)
```
Instantiate a keras_hub.models.Backbone from a model preset.

A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:

- a built-in preset identifier like 'bert_base_en'
- a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
- a Hugging Face handle like 'hf://user/bert_base_en'
- a path to a local preset directory like './bert_base_en'

This constructor can be called in one of two ways: either from the base class, like keras_hub.models.Backbone.from_preset(), or from a model class, like keras_hub.models.GemmaBackbone.from_preset(). If calling from the base class, the subclass of the returned object will be inferred from the config in the preset directory.

For any Backbone subclass, you can run cls.presets.keys() to list all built-in presets available on the class.
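For example, a quick sketch of listing the RoBERTa presets registered on this class (the exact keys depend on your installed version):

```python
# A minimal sketch: list the built-in presets available for RobertaBackbone.
print(keras_hub.models.RobertaBackbone.presets.keys())
```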
Arguments
- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- load_weights: bool. If True, the weights will be loaded into the model architecture. If False, the weights will be randomly initialized.
Examples
```python
# Load a Gemma backbone with pre-trained weights.
model = keras_hub.models.Backbone.from_preset(
    "gemma_2b_en",
)

# Load a Bert backbone with a pre-trained config and random weights.
model = keras_hub.models.Backbone.from_preset(
    "bert_base_en",
    load_weights=False,
)
```
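Tying this back to RoBERTa, a minimal sketch (assuming the roberta_base_en preset listed below) of calling from_preset() on the base class, which infers the concrete subclass from the preset's config:

```python
# A minimal sketch: the base class infers the subclass from the preset config,
# so this returns a RobertaBackbone instance.
backbone = keras_hub.models.Backbone.from_preset("roberta_base_en")
print(type(backbone).__name__)  # Expected: "RobertaBackbone"
```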
| Preset name | Parameters | Description |
|---|---|---|
| roberta_base_en | 124.05M | 12-layer RoBERTa model where case is maintained. Trained on English Wikipedia, BooksCorpus, CommonCrawl, and OpenWebText. |
| roberta_large_en | 354.31M | 24-layer RoBERTa model where case is maintained. Trained on English Wikipedia, BooksCorpus, CommonCrawl, and OpenWebText. |
| xlm_roberta_base_multi | 277.45M | 12-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
| xlm_roberta_large_multi | 558.84M | 24-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
token_embedding property

```python
keras_nlp.models.RobertaBackbone.token_embedding
```

A keras.layers.Embedding instance for embedding token ids.

This layer embeds integer token ids to the hidden dim of the model.
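A minimal sketch of using this property directly, assuming the custom model defined in the examples above and a batch of integer token ids:

```python
# A minimal sketch: embed integer token ids with the backbone's token
# embedding layer; output shape is (batch_size, sequence_length, hidden_dim).
token_ids = np.ones(shape=(1, 12), dtype="int32")
embeddings = model.token_embedding(token_ids)
print(embeddings.shape)  # Expected: (1, 12, 256) for the custom config above.
```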