Fine-tuning
Fine-tuning allows enterprise customers to customize Telesoft's pre-trained medical AI models for specific use cases, specialties, or patient populations. This documentation provides guidance on when and how to use fine-tuning capabilities.
Fine-tuning Overview
Our fine-tuning capability enables you to enhance our base models with your own data, making the AI more effective for your specific context while still leveraging our robust pre-trained medical knowledge.
When to Consider Fine-tuning
- Specialty-specific Applications: Improving performance for cardiology, oncology, neurology, etc.
- Unique Patient Populations: Adapting to specific demographics, comorbidity patterns, or regional variations
- Custom Clinical Workflows: Tailoring outputs to match your institution's specific protocols and pathways
- Rare Condition Focus: Enhancing detection of specific rare conditions relevant to your practice
- Regional Healthcare Patterns: Adapting to local disease prevalence and practice patterns
Benefits of Fine-tuning
Improved Accuracy
5-15% average improvement on domain-specific tasks after fine-tuning with quality data
Customized Outputs
Align AI recommendations with your institution's specific protocols and preferences
Reduced False Positives
More precise predictions tailored to your clinical context
Specialized Expertise
Enhance models with your institution's specialized knowledge and experience
Fine-tuning Approaches
Telesoft offers several approaches to fine-tuning, depending on your specific requirements:
1. Parameter-Efficient Fine-tuning (PEFT)
Our recommended approach for most use cases, PEFT adjusts only a small subset of model parameters while keeping most of the pre-trained weights frozen.
Key Benefits
- Requires less training data (typically 1,000-5,000 examples)
- Lower computational requirements
- Faster training time (hours instead of days)
- Reduces risk of catastrophic forgetting
- Maintains performance on general medical knowledge
Ideal for: Most clinical specialties, adapting to institution-specific patterns, and enhancing performance for specific conditions.
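To make the idea concrete, here is a minimal sketch, in plain JavaScript and independent of the Telesoft API, of the low-rank-adapter pattern commonly used for PEFT: the frozen base weight matrix `W` is left untouched, and training only adjusts two small matrices `A` and `B` whose product is added on top. All names here are illustrative.

```javascript
// Low-rank update sketch: effective weights W' = W + A·B, where W stays
// frozen and only the much smaller A (d×r) and B (r×d) are trained.

function matMul(a, b) {
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}

function applyLowRankUpdate(W, A, B) {
  const delta = matMul(A, B); // the trained adjustment
  return W.map((row, i) => row.map((v, j) => v + delta[i][j]));
}

// Toy 2×2 example with a rank-1 adapter; at realistic sizes (d in the
// thousands, r around 8-64) A and B hold a tiny fraction of W's parameters.
const W = [[10, 0], [0, 10]];
const adapted = applyLowRankUpdate(W, [[1], [2]], [[3, 4]]);
// adapted → [[13, 4], [6, 18]]; W itself is unchanged
```

This is only an illustration of why PEFT needs less data and compute than full fine-tuning; adapter placement and ranks are handled by the service.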
2. Full Model Fine-tuning
Updates all model parameters based on your training data, providing maximum customization but requiring more data and computational resources.
Key Considerations
- Requires large amounts of training data (typically 10,000+ examples)
- Higher computational requirements
- Risk of overfitting to your specific data
- May compromise performance on general medical tasks
- More extensive validation required
Ideal for: Highly specialized applications with extensive proprietary data, or when developing models for novel medical contexts significantly different from general practice.
3. Supervised Fine-tuning with Expert Feedback
Combines your clinical data with expert feedback loops to iteratively improve model performance on specific tasks.
Process
- Initial model generates predictions on your data
- Clinical experts review and correct predictions
- Corrections used to update the model
- Process repeats iteratively to improve performance
Ideal for: High-stakes clinical applications, rare disease detection, and scenarios where explanation quality is critical.
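The loop above can be sketched in a few lines. The names (`runFeedbackRound`, `expertReview`) are illustrative, not part of the Telesoft API:

```javascript
// One round of the expert-feedback loop: predict, collect expert
// corrections, fold the corrections back into the model, repeat.

function runFeedbackRound(model, cases, expertReview) {
  const corrections = [];
  for (const c of cases) {
    const predicted = model.predict(c.features);
    const corrected = expertReview(c, predicted); // expert's label
    if (corrected !== predicted) corrections.push({ ...c, label: corrected });
  }
  model.update(corrections); // corrections become new training signal
  return corrections.length; // disagreements left this round
}

// Toy "model" that just memorizes corrected labels, to show convergence.
const model = {
  memory: new Map(),
  predict(features) { return this.memory.get(features) ?? "negative"; },
  update(corrections) {
    for (const c of corrections) this.memory.set(c.features, c.label);
  }
};

const cases = [{ features: "case-1" }, { features: "case-2" }];
const expertReview = () => "positive"; // stand-in for clinician review

runFeedbackRound(model, cases, expertReview); // → 2 disagreements
runFeedbackRound(model, cases, expertReview); // → 0: model now agrees
```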
4. Knowledge Base Customization
Rather than modifying model weights, this approach customizes the knowledge sources the model references.
Applications
- Integration of institution-specific clinical guidelines
- Custom formularies and medication preferences
- Integration of proprietary clinical pathways
- Inclusion of recent research not yet in model updates
Ideal for: Organizations with established clinical protocols, formulary restrictions, or specialized treatment algorithms.
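A minimal sketch of the override idea, with hypothetical names (`buildKnowledgeBase`, `recommend`) that are not part of the Telesoft API:

```javascript
// The model's weights stay fixed; recommendations are resolved against a
// layered knowledge base where institution entries override the defaults.

function buildKnowledgeBase(defaults, institutionOverrides) {
  // Map construction keeps the last entry per key, so overrides win.
  return new Map([...defaults, ...institutionOverrides]);
}

function recommend(kb, condition) {
  return kb.get(condition) ?? "no guideline on file";
}

const kb = buildKnowledgeBase(
  [["condition-a", "default-protocol"], ["condition-b", "default-protocol"]],
  [["condition-a", "institutional-pathway"]] // e.g. a formulary restriction
);

recommend(kb, "condition-a"); // → "institutional-pathway"
recommend(kb, "condition-b"); // → "default-protocol"
```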
⚠️ Important Considerations
Fine-tuning is not always necessary. For many applications, our base models with proper parameter configuration will perform excellently. We recommend starting with our pre-trained models and only pursuing fine-tuning when you identify specific performance gaps relevant to your use case.
Fine-tuning Process
The fine-tuning process involves several key steps:
Step 1: Data Preparation
Prepare and format your clinical data for fine-tuning:
- Collect high-quality, representative examples
- Ensure proper de-identification of PHI
- Format data according to our API specifications
- Split data into training, validation, and test sets
- Validate data quality with our assessment tools
Sample Size Guidelines:
- PEFT: 1,000-5,000 examples
- Full fine-tuning: 10,000+ examples
- Specialty tasks: minimum 300 examples per condition
- Rare conditions: as many examples as available, with data augmentation
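Splitting into training, validation, and test sets should be reproducible so the holdout set stays fixed across runs. A sketch of a seeded 80/10/10 split (helper names are illustrative, not part of the Telesoft API):

```javascript
// Deterministic train/validation/test split using a small seeded PRNG
// (mulberry32), so the same seed always yields the same holdout set.

function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function splitDataset(examples, seed = 42) {
  const rng = mulberry32(seed);
  const shuffled = [...examples];
  for (let i = shuffled.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(rng() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const nTrain = Math.floor(shuffled.length * 0.8);
  const nVal = Math.floor(shuffled.length * 0.1);
  return {
    train: shuffled.slice(0, nTrain),
    validation: shuffled.slice(nTrain, nTrain + nVal),
    test: shuffled.slice(nTrain + nVal)
  };
}
```

For clinical data, split by patient rather than by record, so the same patient never appears in both training and test sets.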
Step 2: Model Selection
Choose the appropriate base model for your use case:
- Diagnostic Model: For symptom-based diagnosis and clinical reasoning
- Imaging Models: Specialized for different imaging modalities
- Treatment Model: For therapy recommendations and care planning
- Risk Prediction: For prognosis and outcome prediction
Our team will help you select the optimal base model for your specific requirements.
Step 3: Fine-tuning Configuration
Configure the fine-tuning process with our API:
```javascript
// Example fine-tuning configuration
const fineTuningJob = await telesoft.fineTuning.create({
  baseModel: "telesoft-diagnostic-v2",
  approach: "peft", // Options: "peft", "full", "expert_feedback", "knowledge_base"
  trainingData: "your-dataset-id",
  validationData: "your-validation-dataset-id",
  hyperparameters: {
    learningRate: 1e-5,
    epochs: 3,
    batchSize: 16,
    warmupSteps: 500,
    weightDecay: 0.01
  },
  specialty: "cardiology", // Optional: medical specialty focus
  targetMetrics: {
    primary: "f1_score",
    minimum: 0.85
  },
  preserveCapabilities: [
    "general_medical_knowledge",
    "regulatory_compliance"
  ]
});
```
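The `targetMetrics` setting gates acceptance on an F1 score of at least 0.85. The service computes this server-side; for local sanity checks, F1 can be derived from confusion-matrix counts as follows (illustrative helper):

```javascript
// F1 = harmonic mean of precision and recall, from confusion-matrix counts.

function f1Score(truePositives, falsePositives, falseNegatives) {
  if (truePositives === 0) return 0; // avoid 0/0 when nothing is detected
  const precision = truePositives / (truePositives + falsePositives);
  const recall = truePositives / (truePositives + falseNegatives);
  return (2 * precision * recall) / (precision + recall);
}

f1Score(90, 10, 10); // ≈ 0.9 (precision 0.9, recall 0.9): meets the minimum
```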
Step 4: Training and Monitoring
Monitor the fine-tuning process:
- Track progress through our dashboard or API
- Monitor metrics on validation data
- Receive notifications for key milestones
- Address any issues flagged during training
```javascript
// Check fine-tuning job status
const status = await telesoft.fineTuning.status({
  jobId: fineTuningJob.id
});

console.log(`Training progress: ${status.progress}%`);
console.log(`Current metrics: ${JSON.stringify(status.metrics)}`);
console.log(`Estimated completion: ${status.estimatedCompletion}`);
```
Step 5: Evaluation
Evaluate the fine-tuned model's performance:
- Comprehensive performance metrics on test data
- Comparison to base model performance
- Subgroup analysis across demographics
- Error analysis and edge case identification
- Clinical validation with expert review
```javascript
// Evaluate fine-tuned model
const evaluation = await telesoft.fineTuning.evaluate({
  modelId: fineTuningJob.resultModel.id,
  testData: "your-test-dataset-id",
  compareToBase: true,
  subgroupAnalysis: true
});

console.log("Performance increase:", evaluation.improvementSummary);
console.log("Detailed metrics:", evaluation.metrics);
console.log("Subgroup performance:", evaluation.subgroupPerformance);
```
Step 6: Deployment
Deploy your fine-tuned model to production:
- Deploy through our secure API infrastructure
- Implement in staging environment first
- Conduct A/B testing against base model
- Set up monitoring and feedback loops
- Prepare rollback options if needed
```javascript
// Deploy fine-tuned model
const deployment = await telesoft.fineTuning.deploy({
  modelId: fineTuningJob.resultModel.id,
  environment: "staging", // Options: "staging", "production"
  rolloutPercentage: 25, // Start with partial traffic
  monitoringSettings: {
    alertThresholds: {
      errorRate: 0.05,
      latency: 500
    },
    performanceTracking: true
  }
});

// Use your custom model
const analysis = await telesoft.diagnostics.analyze({
  patientData,
  modelId: fineTuningJob.resultModel.id // Specify your custom model
});
```
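As one way to "prepare rollback options", the alert thresholds configured for deployment can be mirrored in a simple client-side check. This helper is hypothetical, not part of the Telesoft API:

```javascript
// Flag a rollback when live metrics breach the configured alert thresholds.

function shouldRollBack(liveMetrics, alertThresholds) {
  return liveMetrics.errorRate > alertThresholds.errorRate ||
         liveMetrics.latencyMs > alertThresholds.latency;
}

const thresholds = { errorRate: 0.05, latency: 500 };

shouldRollBack({ errorRate: 0.02, latencyMs: 320 }, thresholds); // → false
shouldRollBack({ errorRate: 0.08, latencyMs: 320 }, thresholds); // → true
```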
Case Studies
Cardiology Specialty Practice
Challenge: Improve diagnostic accuracy for complex cardiac arrhythmias
Approach: PEFT fine-tuning with 3,500 annotated ECG examples
Results:
- 12% increase in diagnostic accuracy for complex cases
- 17% reduction in false positives for critical arrhythmias
- Improved classification of borderline cases
- Enhanced detection of rare arrhythmia patterns
Regional Health System
Challenge: Adapt treatment recommendations to local formulary restrictions and protocols
Approach: Knowledge Base Customization with proprietary treatment pathways
Results:
- 95% adherence to institutional protocols
- 87% reduction in non-formulary recommendations
- Incorporated regional antibiotic resistance patterns
- Streamlined integration with existing clinical workflows
Academic Medical Center
Challenge: Enhance detection of subtle findings in pediatric chest X-rays
Approach: Full model fine-tuning with 15,000 expert-annotated images
Results:
- AUROC improved from 0.91 to 0.97 for subtle pneumonia findings
- Sensitivity increased by 14% for early-stage conditions
- Enhanced localization precision of abnormalities
- Improved performance across diverse pediatric age groups
Best Practices
Data Quality
- Prioritize data quality over quantity
- Ensure expert validation of training labels
- Include diverse and representative examples
- Balance common and rare conditions appropriately
- Document data provenance and annotation methodology
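One common way to balance common and rare conditions is inverse-frequency class weighting, so rare labels contribute more per example during training. A sketch, with illustrative names:

```javascript
// Inverse-frequency weights: weight(label) = N / (numClasses * count(label)).

function classWeights(labels) {
  const counts = new Map();
  for (const label of labels) counts.set(label, (counts.get(label) ?? 0) + 1);
  const weights = new Map();
  for (const [label, n] of counts) {
    weights.set(label, labels.length / (counts.size * n));
  }
  return weights;
}

const labels = [...Array(8).fill("common-dx"), ...Array(2).fill("rare-dx")];
classWeights(labels);
// → Map { "common-dx" => 0.625, "rare-dx" => 2.5 }
```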
Training Process
- Start with conservative hyperparameters and gradually refine
- Implement early stopping based on validation performance
- Maintain a holdout test set that remains untouched during development
- Track performance across multiple metrics, not just the primary one
- Document all training decisions and configurations
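The early-stopping recommendation above can be expressed as a patience rule: stop once the validation metric has gone `patience` epochs without improving, and keep the best checkpoint. A minimal sketch with illustrative names:

```javascript
// Stop when the validation metric has not improved for `patience`
// consecutive epochs, and keep the checkpoint from the best epoch.

function earlyStopEpoch(validationScores, patience = 2) {
  let best = -Infinity;
  let bestEpoch = 0;
  for (let epoch = 0; epoch < validationScores.length; epoch++) {
    if (validationScores[epoch] > best) {
      best = validationScores[epoch];
      bestEpoch = epoch;
    } else if (epoch - bestEpoch >= patience) {
      return bestEpoch; // no improvement for `patience` epochs: stop here
    }
  }
  return bestEpoch;
}

// Validation F1 by epoch: improvement stalls after epoch 2.
earlyStopEpoch([0.70, 0.78, 0.80, 0.79, 0.795, 0.78], 2); // → 2
```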
Evaluation and Validation
- Evaluate on realistically diverse test data
- Include challenging edge cases in evaluation
- Analyze performance across demographic subgroups
- Conduct error analysis to identify patterns
- Implement clinical validation with expert review
- Compare against existing clinical decision support tools
Deployment and Monitoring
- Gradually roll out with increasing traffic percentages
- Monitor performance in production environment
- Collect user feedback systematically
- Establish clear thresholds for intervention
- Plan for periodic retraining and updates
- Document model versions and their performance characteristics
ℹ️ Expert Support
Telesoft provides dedicated ML engineering support for enterprise fine-tuning projects. Our team can guide you through the entire process, from data preparation to deployment and monitoring. Contact us at ml-support@telesoft.us to discuss your specific needs.