Welcome to Finetuner!#
Fine-tuning with domain-specific data can improve performance on neural search tasks. However, it is non-trivial, since it requires a combination of expertise in deep learning and information retrieval.
Finetuner makes this procedure simpler and faster by streamlining the workflow and handling all the complexity and infrastructure in the cloud. With Finetuner, you can easily make pre-trained models more performant and production-ready.
Performance boost: Finetuner delivers SOTA performance on domain-specific neural search applications.
Simple yet powerful: Easily access features such as 40+ mainstream loss functions, 10+ optimizers, layer pruning, weight freezing, dimensionality reduction, hard-negative mining, cross-modality fine-tuning, and distributed training.
All-in-cloud: Manage your runs, experiments, and artifacts on Jina Cloud (for free!) without ever worrying about provisioning (cloud) resources; Finetuner handles all the related complexity and infrastructure.
Finetuner primarily targets business users and engineers with limited machine learning knowledge, but it also exposes plenty of configuration options for experienced professionals.
Why do I need it?#
Search quality matters. If you use a pre-trained model to encode your data into embeddings, you are likely to get irrelevant search results. Pre-trained deep learning models are usually trained on large-scale datasets that have a different data distribution from your own dataset or domain. This is referred to as a distribution shift.
Finetuner provides a solution to this problem: it takes a model pre-trained on a large dataset and fine-tunes its parameters on your dataset.
Once fine-tuning is done, you get a model adapted to your domain. This new model delivers better search performance on your task of interest.
Fine-tuning a pre-trained model involves a certain complexity and requires machine learning expertise plus domain knowledge (of NLP, computer vision, etc.). It is therefore a non-trivial task for business owners and engineers who lack practical deep learning experience. Finetuner addresses this by providing a simple interface, which can be as easy as:
import finetuner
from docarray import DocumentArray
# Login to Jina ecosystem
finetuner.login()
# Prepare training data
train_data = DocumentArray(...)
# Fine-tune in the cloud
run = finetuner.fit(
    model='resnet50', train_data=train_data, epochs=5, batch_size=128,
)
print(run.name)
print(run.logs())
# When ready
run.save_artifact(directory='experiment')
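The train_data placeholder above is left abstract on purpose. As a minimal sketch of one supported setup (the file paths and class labels here are hypothetical), an image training set can be a DocumentArray of Documents whose class is stored in the finetuner_label tag:
from docarray import Document, DocumentArray
# Hypothetical image files and class labels; Finetuner reads the label
# of each Document from its 'finetuner_label' tag.
train_data = DocumentArray([
    Document(uri='data/shoes/001.jpg', tags={'finetuner_label': 'shoes'}),
    Document(uri='data/shoes/002.jpg', tags={'finetuner_label': 'shoes'}),
    Document(uri='data/bags/001.jpg', tags={'finetuner_label': 'bags'}),
    Document(uri='data/bags/002.jpg', tags={'finetuner_label': 'bags'}),
])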
When you run the fine-tuning snippet above, you should see this in your terminal:
Successfully logged in to Jina ecosystem!
Run name: vigilant-tereshkova
Run logs:

  Training [2/2] ━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50/50 0:00:00 0:01:08 • loss: 0.050
[09:13:23] INFO     [__main__] Done                                __main__.py:214
           INFO     [__main__] Saving fine-tuned models ...        __main__.py:217
           INFO     [__main__] Saving model 'tuned_model' in       __main__.py:228
                    /usr/src/app/tuned-models/model ...
           INFO     [__main__] Pushing saved model to Hubble ...   __main__.py:232
[09:13:54] INFO     [__main__] Pushed model artifact ID:           __main__.py:238
                    '62972acb5de25a53fdbfcecc'
           INFO     [__main__] Finished                            __main__.py:240
Submitted fine-tuning jobs run efficiently on Jina Cloud on either CPU- or GPU-enabled hardware.
Finetuner fully owns the complexity of setting up and maintaining the model-training infrastructure, as well as the complexity of delivering SOTA training methods to production use cases.
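Once a job is submitted, you can reconnect to it and use the result from anywhere. Here is a minimal sketch, assuming the run above has finished; the run name and the image path are hypothetical, and get_run, get_model, and encode are Finetuner's helpers for reconnecting to runs and encoding data:
import finetuner
from docarray import Document, DocumentArray
finetuner.login()
# Reconnect to an earlier run by name ('vigilant-tereshkova' is the
# example name from above; yours will differ).
run = finetuner.get_run('vigilant-tereshkova')
print(run.status())  # whether the job is queued, running, or finished
# Once finished, pull the fine-tuned model and encode data with it.
model = finetuner.get_model(run.artifact_id)
query = DocumentArray([Document(uri='my-image.jpg')])  # hypothetical image
finetuner.encode(model=model, data=query)
print(query.embeddings.shape)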
Important
Not sure which model to use?
Don't worry, call finetuner.describe_models()
and we will help you choose the best fit.
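As a quick sketch (the exact contents of the printed table depend on your installed Finetuner version):
import finetuner
# Prints a table of the supported backbone models, e.g. their names,
# tasks, and output dimensions, to help you pick one.
finetuner.describe_models()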
Support#
Use Discussions to talk about your use cases, questions, and support queries.
Join our Slack community and chat with other Jina AI community members about ideas.
Join our Engineering All Hands meet-up to discuss your use case and learn about Jina AI's new features.
When? The second Tuesday of every month
Where? Zoom (see our public events calendar/.ical) and live stream on YouTube
Subscribe to the latest video tutorials on our YouTube channel
Join Us#
Finetuner is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.