finetuner.models module#

class finetuner.models.MLP(input_size, hidden_sizes, bias=True, activation=None, l2=False)[source]#

Bases: finetuner.models._ModelStub

MLP model stub.

Parameters
  • input_size (int) – Size of the input representations.

  • hidden_sizes (List[int]) – A list of sizes of the hidden layers. The last hidden size is the output size.

  • bias (bool) – Whether to add bias to each layer.

  • activation (Optional[str]) – Name of the activation function applied after each layer: 'relu', 'tanh' or 'sigmoid'. Set to None for no activation.

  • l2 (bool) – Whether to apply L2 normalization to the output layer.

name: str = 'mlp'#
description: str = 'Simple MLP encoder trained from scratch'#
task: str = 'any'#
output_dim: Optional[int] = None#
architecture: str = 'MLP'#
options: Dict[str, Any]#
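The parameter semantics above can be sketched in plain Python. This is an illustrative, framework-free re-implementation of the architecture the docstring describes, not Finetuner's actual model builder; the names mlp_layer_shapes and mlp_forward are hypothetical.

```python
import math

def mlp_layer_shapes(input_size, hidden_sizes):
    # Each entry of hidden_sizes defines one linear layer; the last
    # hidden size is the output size, as documented above.
    dims = [input_size] + list(hidden_sizes)
    return list(zip(dims[:-1], dims[1:]))

def mlp_forward(x, weights, biases=None, activation='relu', l2=False):
    # Apply each linear layer followed by the configured activation:
    # 'relu', 'tanh', 'sigmoid', or None for no activation.
    act = {
        'relu': lambda v: max(v, 0.0),
        'tanh': math.tanh,
        'sigmoid': lambda v: 1.0 / (1.0 + math.exp(-v)),
        None: lambda v: v,
    }[activation]
    for i, w in enumerate(weights):
        b = biases[i] if biases is not None else [0.0] * len(w)
        x = [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(w, b)]
    if l2:
        # L2-normalize the final representation when l2=True.
        norm = math.sqrt(sum(v * v for v in x)) or 1.0
        x = [v / norm for v in x]
    return x
```

For example, `mlp_layer_shapes(128, [64, 32])` yields layers of shape (128, 64) and (64, 32), so the encoder outputs 32-dimensional representations.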
class finetuner.models.ResNet50[source]#

Bases: finetuner.models._ModelStub

ResNet50 model stub.

name: str = 'resnet50'#
description: str = 'Pretrained on ImageNet'#
task: str = 'image-to-image'#
output_dim: Optional[int] = 2048#
architecture: str = 'CNN'#
options: Dict[str, Any]#
class finetuner.models.ResNet152[source]#

Bases: finetuner.models._ModelStub

ResNet152 model stub.

name: str = 'resnet152'#
description: str = 'Pretrained on ImageNet'#
task: str = 'image-to-image'#
output_dim: Optional[int] = 2048#
architecture: str = 'CNN'#
options: Dict[str, Any]#
class finetuner.models.EfficientNetB0[source]#

Bases: finetuner.models._ModelStub

EfficientNetB0 model stub.

name: str = 'efficientnet_b0'#
description: str = 'Pretrained on ImageNet'#
task: str = 'image-to-image'#
output_dim: Optional[int] = 1280#
architecture: str = 'CNN'#
options: Dict[str, Any]#
class finetuner.models.EfficientNetB4[source]#

Bases: finetuner.models._ModelStub

EfficientNetB4 model stub.

name: str = 'efficientnet_b4'#
description: str = 'Pretrained on ImageNet'#
task: str = 'image-to-image'#
output_dim: Optional[int] = 1792#
architecture: str = 'CNN'#
options: Dict[str, Any]#
class finetuner.models.OpenAICLIP[source]#

Bases: finetuner.models._ModelStub

OpenAICLIP model stub.

name: str = 'openai/clip-vit-base-patch32'#
description: str = 'Pretrained on text-image pairs by OpenAI'#
task: str = 'text-to-image'#
output_dim: Optional[int] = 768#
architecture: str = 'transformer'#
options: Dict[str, Any]#
class finetuner.models.BERT[source]#

Bases: finetuner.models._ModelStub

BERT model stub.

name: str = 'bert-base-cased'#
description: str = 'Pretrained on BookCorpus and English Wikipedia'#
task: str = 'text-to-text'#
output_dim: Optional[int] = 768#
architecture: str = 'transformer'#
options: Dict[str, Any]#
class finetuner.models.SentenceTransformer[source]#

Bases: finetuner.models._ModelStub

SentenceTransformer model stub.

name: str = 'sentence-transformers/msmarco-distilbert-base-v3'#
description: str = 'Pretrained BERT, fine-tuned on MS MARCO'#
task: str = 'text-to-text'#
output_dim: Optional[int] = 768#
architecture: str = 'transformer'#
options: Dict[str, Any]#
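Taken together, the name and task attributes of these stubs form a small lookup table. The sketch below (a hypothetical helper, not part of the finetuner API; the task values are copied from the stub attributes on this page) shows how that metadata can be used to filter backbones by task:

```python
# Task values copied from the stub attributes documented above.
MODEL_TASKS = {
    'mlp': 'any',
    'resnet50': 'image-to-image',
    'resnet152': 'image-to-image',
    'efficientnet_b0': 'image-to-image',
    'efficientnet_b4': 'image-to-image',
    'openai/clip-vit-base-patch32': 'text-to-image',
    'bert-base-cased': 'text-to-text',
    'sentence-transformers/msmarco-distilbert-base-v3': 'text-to-text',
}

def models_for_task(task):
    # The MLP stub declares task 'any', so it matches every query.
    return [name for name, t in MODEL_TASKS.items() if t in (task, 'any')]
```

For instance, `models_for_task('text-to-text')` returns the two text encoders plus 'mlp', while the CNN backbones only appear for 'image-to-image' queries.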