KerasHub Models

KerasHub contains end-to-end implementations of popular model architectures. These models can be created in two ways:

  • Through the from_preset() constructor, which instantiates an object with a pre-trained configuration, vocabulary, and (optionally) weights.
  • Through custom configuration controlled by the user (see the sketch after the next paragraph).

Below, we list all presets available in the library. For more detailed usage, browse the docstring for a particular class. For an in-depth introduction to our API, see the getting started guide.
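
As an example of the second approach, a backbone can be constructed directly from a custom configuration, which produces a randomly initialized model with no pre-trained weights. The snippet below is a minimal sketch assuming the keras_hub.models.BertBackbone constructor arguments shown; the sizes are illustrative and do not correspond to any published configuration.

import keras_hub

# Build a small, randomly initialized BERT-style backbone from a custom
# configuration (no preset, no pre-trained weights). Sizes are illustrative.
backbone = keras_hub.models.BertBackbone(
    vocabulary_size=30522,
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=128,
)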

Presets

The following preset names correspond to a config and weights for a pretrained model. The from_preset() constructor on any task, preprocessor, backbone, or tokenizer class can be used to create a model from a saved preset.

import keras_hub

# Load a pre-trained backbone, tokenizer, task model, and matching
# preprocessor from the same preset.
backbone = keras_hub.models.Backbone.from_preset("bert_base_en")
tokenizer = keras_hub.models.Tokenizer.from_preset("bert_base_en")
classifier = keras_hub.models.TextClassifier.from_preset("bert_base_en", num_classes=2)
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset("bert_base_en")
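
Task models returned by from_preset() are regular Keras models, so the classifier created above can be fine-tuned with the standard compile()/fit() workflow. The snippet below is a minimal sketch: the example strings and labels are made up, and it assumes the classifier bundles its matching preprocessor by default and outputs logits (hence from_logits=True).

import keras

# Toy two-class data; TextClassifier presets bundle a preprocessor, so raw
# strings can be passed directly to fit() and predict().
features = ["The movie was great!", "The plot was confusing and dull."]
labels = [1, 0]

classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
)
classifier.fit(x=features, y=labels, batch_size=2)
predictions = classifier.predict(["A delightful surprise from start to finish."])
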
| Preset name | Model | Parameters | Description |
| --- | --- | --- | --- |
| albert_base_en_uncased | ALBERT | 11.68M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| albert_large_en_uncased | ALBERT | 17.68M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| albert_extra_large_en_uncased | ALBERT | 58.72M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| albert_extra_extra_large_en_uncased | ALBERT | 222.60M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bart_base_en | BART | 139.42M | 6-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. Model Card |
| bart_large_en | BART | 406.29M | 12-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. Model Card |
| bart_large_en_cnn | BART | 406.29M | The bart_large_en backbone model fine-tuned on the CNN+DM summarization dataset. Model Card |
| bert_tiny_en_uncased | BERT | 4.39M | 2-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_small_en_uncased | BERT | 28.76M | 4-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_medium_en_uncased | BERT | 41.37M | 8-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_base_en_uncased | BERT | 109.48M | 12-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_base_en | BERT | 108.31M | 12-layer BERT model where case is maintained. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_base_zh | BERT | 102.27M | 12-layer BERT model. Trained on Chinese Wikipedia. Model Card |
| bert_base_multi | BERT | 177.85M | 12-layer BERT model where case is maintained. Trained on Wikipedias of 104 languages. Model Card |
| bert_large_en_uncased | BERT | 335.14M | 24-layer BERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_large_en | BERT | 333.58M | 24-layer BERT model where case is maintained. Trained on English Wikipedia + BooksCorpus. Model Card |
| bert_tiny_en_uncased_sst2 | BERT | 4.39M | The bert_tiny_en_uncased backbone model fine-tuned on the SST-2 sentiment analysis dataset. Model Card |
| bloom_560m_multi | BLOOM | 559.21M | 24-layer Bloom model with hidden dimension of 1024. Trained on 45 natural languages and 12 programming languages. Model Card |
| bloom_1.1b_multi | BLOOM | 1.07B | 24-layer Bloom model with hidden dimension of 1536. Trained on 45 natural languages and 12 programming languages. Model Card |
| bloom_1.7b_multi | BLOOM | 1.72B | 24-layer Bloom model with hidden dimension of 2048. Trained on 45 natural languages and 12 programming languages. Model Card |
| bloom_3b_multi | BLOOM | 3.00B | 30-layer Bloom model with hidden dimension of 2560. Trained on 45 natural languages and 12 programming languages. Model Card |
| bloomz_560m_multi | BLOOMZ | 559.21M | 24-layer Bloom model with hidden dimension of 1024. Finetuned on the cross-lingual task mixture (xP3) dataset. Model Card |
| bloomz_1.1b_multi | BLOOMZ | 1.07B | 24-layer Bloom model with hidden dimension of 1536. Finetuned on the cross-lingual task mixture (xP3) dataset. Model Card |
| bloomz_1.7b_multi | BLOOMZ | 1.72B | 24-layer Bloom model with hidden dimension of 2048. Finetuned on the cross-lingual task mixture (xP3) dataset. Model Card |
| bloomz_3b_multi | BLOOMZ | 3.00B | 30-layer Bloom model with hidden dimension of 2560. Finetuned on the cross-lingual task mixture (xP3) dataset. Model Card |
| deberta_v3_extra_small_en | DeBERTaV3 | 70.68M | 12-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. Model Card |
| deberta_v3_small_en | DeBERTaV3 | 141.30M | 6-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. Model Card |
| deberta_v3_base_en | DeBERTaV3 | 183.83M | 12-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. Model Card |
| deberta_v3_large_en | DeBERTaV3 | 434.01M | 24-layer DeBERTaV3 model where case is maintained. Trained on English Wikipedia, BookCorpus and OpenWebText. Model Card |
| deberta_v3_base_multi | DeBERTaV3 | 278.22M | 12-layer DeBERTaV3 model where case is maintained. Trained on the 2.5TB multilingual CC100 dataset. Model Card |
| deeplab_v3_plus_resnet50_pascalvoc | DeepLabV3 | 39.19M | DeepLabV3+ model with a ResNet50 image encoder, trained on the Pascal VOC dataset augmented with the Semantic Boundaries Dataset (SBD). Achieves 90.01% categorical accuracy and 0.63 mean IoU. Model Card |
| densenet_121_imagenet | DenseNet | 7.04M | 121-layer DenseNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| densenet_169_imagenet | DenseNet | 12.64M | 169-layer DenseNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| densenet_201_imagenet | DenseNet | 18.32M | 201-layer DenseNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| distil_bert_base_en_uncased | DistilBERT | 66.36M | 6-layer DistilBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model. Model Card |
| distil_bert_base_en | DistilBERT | 65.19M | 6-layer DistilBERT model where case is maintained. Trained on English Wikipedia + BooksCorpus using BERT as the teacher model. Model Card |
| distil_bert_base_multi | DistilBERT | 134.73M | 6-layer DistilBERT model where case is maintained. Trained on Wikipedias of 104 languages. Model Card |
| electra_small_discriminator_uncased_en | ELECTRA | 13.55M | 12-layer small ELECTRA discriminator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| electra_small_generator_uncased_en | ELECTRA | 13.55M | 12-layer small ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| electra_base_discriminator_uncased_en | ELECTRA | 109.48M | 12-layer base ELECTRA discriminator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| electra_base_generator_uncased_en | ELECTRA | 33.58M | 12-layer base ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| electra_large_discriminator_uncased_en | ELECTRA | 335.14M | 24-layer large ELECTRA discriminator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| electra_large_generator_uncased_en | ELECTRA | 51.07M | 24-layer large ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. Model Card |
| f_net_base_en | FNet | 82.86M | 12-layer FNet model where case is maintained. Trained on the C4 dataset. Model Card |
| f_net_large_en | FNet | 236.95M | 24-layer FNet model where case is maintained. Trained on the C4 dataset. Model Card |
| falcon_refinedweb_1b_en | Falcon | 1.31B | 24-layer Falcon model (Falcon with 1B parameters), trained on 350B tokens of the RefinedWeb dataset. Model Card |
| resnet_18_imagenet | ResNet | 11.19M | 18-layer ResNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_50_imagenet | ResNet | 23.56M | 50-layer ResNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_101_imagenet | ResNet | 42.61M | 101-layer ResNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_152_imagenet | ResNet | 58.30M | 152-layer ResNet model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_v2_50_imagenet | ResNet | 23.56M | 50-layer ResNetV2 model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_v2_101_imagenet | ResNet | 42.61M | 101-layer ResNetV2 model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_18_imagenet | ResNet | 11.72M | 18-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_34_imagenet | ResNet | 21.84M | 34-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_50_imagenet | ResNet | 25.63M | 50-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_50_ssld_imagenet | ResNet | 25.63M | 50-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution with knowledge distillation. Model Card |
| resnet_vd_50_ssld_v2_imagenet | ResNet | 25.63M | 50-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution with knowledge distillation and AutoAugment. Model Card |
| resnet_vd_50_ssld_v2_fix_imagenet | ResNet | 25.63M | 50-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution with knowledge distillation, AutoAugment and additional fine-tuning of the classification head. Model Card |
| resnet_vd_101_imagenet | ResNet | 44.67M | 101-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_101_ssld_imagenet | ResNet | 44.67M | 101-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution with knowledge distillation. Model Card |
| resnet_vd_152_imagenet | ResNet | 60.36M | 152-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| resnet_vd_200_imagenet | ResNet | 74.93M | 200-layer ResNetVD (ResNet with bag of tricks) model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| mit_b0_ade20k_512 | MiT | 3.32M | MiT (MixTransformer) model with 8 transformer blocks. |
| mit_b1_ade20k_512 | MiT | 13.16M | MiT (MixTransformer) model with 8 transformer blocks. |
| mit_b2_ade20k_512 | MiT | 24.20M | MiT (MixTransformer) model with 16 transformer blocks. |
| mit_b3_ade20k_512 | MiT | 44.08M | MiT (MixTransformer) model with 28 transformer blocks. |
| mit_b4_ade20k_512 | MiT | 60.85M | MiT (MixTransformer) model with 41 transformer blocks. |
| mit_b5_ade20k_640 | MiT | 81.45M | MiT (MixTransformer) model with 52 transformer blocks. |
| mit_b0_cityscapes_1024 | MiT | 3.32M | MiT (MixTransformer) model with 8 transformer blocks. |
| mit_b1_cityscapes_1024 | MiT | 13.16M | MiT (MixTransformer) model with 8 transformer blocks. |
| mit_b2_cityscapes_1024 | MiT | 24.20M | MiT (MixTransformer) model with 16 transformer blocks. |
| mit_b3_cityscapes_1024 | MiT | 44.08M | MiT (MixTransformer) model with 28 transformer blocks. |
| mit_b4_cityscapes_1024 | MiT | 60.85M | MiT (MixTransformer) model with 41 transformer blocks. |
| mit_b5_cityscapes_1024 | MiT | 81.45M | MiT (MixTransformer) model with 52 transformer blocks. |
| gemma_2b_en | Gemma | 2.51B | 2 billion parameter, 18-layer, base Gemma model. Model Card |
| gemma_instruct_2b_en | Gemma | 2.51B | 2 billion parameter, 18-layer, instruction tuned Gemma model. Model Card |
| gemma_1.1_instruct_2b_en | Gemma | 2.51B | 2 billion parameter, 18-layer, instruction tuned Gemma model. The 1.1 update improves model quality. Model Card |
| code_gemma_1.1_2b_en | Gemma | 2.51B | 2 billion parameter, 18-layer, CodeGemma model. This model has been trained on a fill-in-the-middle (FIM) task for code completion. The 1.1 update improves model quality. Model Card |
| code_gemma_2b_en | Gemma | 2.51B | 2 billion parameter, 18-layer, CodeGemma model. This model has been trained on a fill-in-the-middle (FIM) task for code completion. Model Card |
| gemma_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, base Gemma model. Model Card |
| gemma_instruct_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, instruction tuned Gemma model. Model Card |
| gemma_1.1_instruct_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, instruction tuned Gemma model. The 1.1 update improves model quality. Model Card |
| code_gemma_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, CodeGemma model. This model has been trained on a fill-in-the-middle (FIM) task for code completion. Model Card |
| code_gemma_instruct_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, instruction tuned CodeGemma model. This model has been trained for chat use cases related to code. Model Card |
| code_gemma_1.1_instruct_7b_en | Gemma | 8.54B | 7 billion parameter, 28-layer, instruction tuned CodeGemma model. This model has been trained for chat use cases related to code. The 1.1 update improves model quality. Model Card |
| gemma2_2b_en | Gemma | 2.61B | 2 billion parameter, 26-layer, base Gemma model. Model Card |
| gemma2_instruct_2b_en | Gemma | 2.61B | 2 billion parameter, 26-layer, instruction tuned Gemma model. Model Card |
| gemma2_9b_en | Gemma | 9.24B | 9 billion parameter, 42-layer, base Gemma model. Model Card |
| gemma2_instruct_9b_en | Gemma | 9.24B | 9 billion parameter, 42-layer, instruction tuned Gemma model. Model Card |
| gemma2_27b_en | Gemma | 27.23B | 27 billion parameter, 42-layer, base Gemma model. Model Card |
| gemma2_instruct_27b_en | Gemma | 27.23B | 27 billion parameter, 42-layer, instruction tuned Gemma model. Model Card |
| shieldgemma_2b_en | Gemma | 2.61B | 2 billion parameter, 26-layer, ShieldGemma model. Model Card |
| shieldgemma_9b_en | Gemma | 9.24B | 9 billion parameter, 42-layer, ShieldGemma model. Model Card |
| shieldgemma_27b_en | Gemma | 27.23B | 27 billion parameter, 42-layer, ShieldGemma model. Model Card |
| gpt2_base_en | GPT-2 | 124.44M | 12-layer GPT-2 model where case is maintained. Trained on WebText. Model Card |
| gpt2_medium_en | GPT-2 | 354.82M | 24-layer GPT-2 model where case is maintained. Trained on WebText. Model Card |
| gpt2_large_en | GPT-2 | 774.03M | 36-layer GPT-2 model where case is maintained. Trained on WebText. Model Card |
| gpt2_extra_large_en | GPT-2 | 1.56B | 48-layer GPT-2 model where case is maintained. Trained on WebText. Model Card |
| gpt2_base_en_cnn_dailymail | GPT-2 | 124.44M | 12-layer GPT-2 model where case is maintained. Finetuned on the CNN/DailyMail summarization dataset. |
| llama3_8b_en | LLaMA 3 | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model. Model Card |
| llama3_8b_en_int8 | LLaMA 3 | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. Model Card |
| llama3_instruct_8b_en | LLaMA 3 | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. Model Card |
| llama3_instruct_8b_en_int8 | LLaMA 3 | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. Model Card |
| llama2_7b_en | LLaMA 2 | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model. Model Card |
| llama2_7b_en_int8 | LLaMA 2 | 6.74B | 7 billion parameter, 32-layer, base LLaMA 2 model with activation and weights quantized to int8. Model Card |
| llama2_instruct_7b_en | LLaMA 2 | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model. Model Card |
| llama2_instruct_7b_en_int8 | LLaMA 2 | 6.74B | 7 billion parameter, 32-layer, instruction tuned LLaMA 2 model with activation and weights quantized to int8. Model Card |
| vicuna_1.5_7b_en | Vicuna | 6.74B | 7 billion parameter, 32-layer, instruction tuned Vicuna v1.5 model. Model Card |
| mistral_7b_en | Mistral | 7.24B | Mistral 7B base model. Model Card |
| mistral_instruct_7b_en | Mistral | 7.24B | Mistral 7B instruct model. Model Card |
| mistral_0.2_instruct_7b_en | Mistral | 7.24B | Mistral 7B instruct version 0.2 model. Model Card |
| opt_125m_en | OPT | 125.24M | 12-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora. Model Card |
| opt_1.3b_en | OPT | 1.32B | 24-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora. Model Card |
| opt_2.7b_en | OPT | 2.70B | 32-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora. Model Card |
| opt_6.7b_en | OPT | 6.70B | 32-layer OPT model where case is maintained. Trained on BookCorpus, CommonCrawl, Pile, and PushShift.io corpora. Model Card |
| pali_gemma_3b_mix_224 | PaliGemma | 2.92B | Image size 224, mix fine-tuned, text sequence length of 256. Model Card |
| pali_gemma_3b_mix_448 | PaliGemma | 2.92B | Image size 448, mix fine-tuned, text sequence length of 512. Model Card |
| pali_gemma_3b_224 | PaliGemma | 2.92B | Image size 224, pre-trained, text sequence length of 128. Model Card |
| pali_gemma_3b_448 | PaliGemma | 2.92B | Image size 448, pre-trained, text sequence length of 512. Model Card |
| pali_gemma_3b_896 | PaliGemma | 2.93B | Image size 896, pre-trained, text sequence length of 512. Model Card |
| phi3_mini_4k_instruct_en | Phi-3 | 3.82B | 3.8 billion parameter, 32-layer, 4k context length Phi-3 model. Trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. Model Card |
| phi3_mini_128k_instruct_en | Phi-3 | 3.82B | 3.8 billion parameter, 32-layer, 128k context length Phi-3 model. Trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. Model Card |
| roberta_base_en | RoBERTa | 124.05M | 12-layer RoBERTa model where case is maintained. Trained on English Wikipedia, BooksCorpus, CommonCrawl, and OpenWebText. Model Card |
| roberta_large_en | RoBERTa | 354.31M | 24-layer RoBERTa model where case is maintained. Trained on English Wikipedia, BooksCorpus, CommonCrawl, and OpenWebText. Model Card |
| xlm_roberta_base_multi | XLM-RoBERTa | 277.45M | 12-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. Model Card |
| xlm_roberta_large_multi | XLM-RoBERTa | 558.84M | 24-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. Model Card |
| sam_base_sa1b | SAMImageSegmenter | 93.74M | The base SAM model trained on the SA1B dataset. Model Card |
| sam_large_sa1b | SAMImageSegmenter | 312.34M | The large SAM model trained on the SA1B dataset. Model Card |
| sam_huge_sa1b | SAMImageSegmenter | 641.09M | The huge SAM model trained on the SA1B dataset. Model Card |
| stable_diffusion_3_medium | StableDiffusion3 | 2.99B | 3 billion parameter model, including CLIP L and CLIP G text encoders, the MMDiT generative model, and a VAE autoencoder. Developed by Stability AI. Model Card |
| t5_small_multi | T5 | 0 | 8-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| t5_base_multi | T5 | 0 | 12-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| t5_large_multi | T5 | 0 | 24-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| flan_small_multi | T5 | 0 | 8-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| flan_base_multi | T5 | 0 | 12-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| flan_large_multi | T5 | 0 | 24-layer T5 model. Trained on the Colossal Clean Crawled Corpus (C4). Model Card |
| vgg_11_imagenet | VGG | 9.22M | 11-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| vgg_13_imagenet | VGG | 9.40M | 13-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| vgg_16_imagenet | VGG | 14.71M | 16-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| vgg_19_imagenet | VGG | 20.02M | 19-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. Model Card |
| whisper_tiny_en | Whisper | 37.18M | 4-layer Whisper model. Trained on 438,000 hours of labelled English speech data. Model Card |
| whisper_base_en | Whisper | 124.44M | 6-layer Whisper model. Trained on 438,000 hours of labelled English speech data. Model Card |
| whisper_small_en | Whisper | 241.73M | 12-layer Whisper model. Trained on 438,000 hours of labelled English speech data. Model Card |
| whisper_medium_en | Whisper | 763.86M | 24-layer Whisper model. Trained on 438,000 hours of labelled English speech data. Model Card |
| whisper_tiny_multi | Whisper | 37.76M | 4-layer Whisper model. Trained on 680,000 hours of labelled multilingual speech data. Model Card |
| whisper_base_multi | Whisper | 72.59M | 6-layer Whisper model. Trained on 680,000 hours of labelled multilingual speech data. Model Card |
| whisper_small_multi | Whisper | 241.73M | 12-layer Whisper model. Trained on 680,000 hours of labelled multilingual speech data. Model Card |
| whisper_medium_multi | Whisper | 763.86M | 24-layer Whisper model. Trained on 680,000 hours of labelled multilingual speech data. Model Card |
| whisper_large_multi | Whisper | 1.54B | 32-layer Whisper model. Trained on 680,000 hours of labelled multilingual speech data. Model Card |
| whisper_large_multi_v2 | Whisper | 1.54B | 32-layer Whisper model. Trained for 2.5 epochs on 680,000 hours of labelled multilingual speech data. An improved version of whisper_large_multi. Model Card |

Note: The links provided will lead to the model card or to the official README, if no model card has been provided by the author.

API Documentation

  • Albert
  • Bart
  • Bert
  • Bloom
  • DebertaV3
  • DeepLabV3 and DeepLabV3Plus
  • DenseNet
  • DistilBert
  • Electra
  • Falcon
  • FNet
  • Gemma
  • GPT2
  • Llama
  • Llama3
  • Mistral
  • MiT
  • OPT
  • PaliGemma
  • Phi3
  • ResNet
  • Roberta
  • Segment Anything Model
  • Stable Diffusion 3
  • T5
  • VGG
  • ViTDet
  • Whisper
  • XLMRoberta