
As you move your machine learning (ML) workloads into production, you need to continuously monitor your deployed models and iterate when you observe a deviation in your model performance. When you build a new model, you typically start validating the model offline using historical inference request data. But this data sometimes fails to account for […]
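The intro mentions validating a new model offline by replaying historical inference request data before promoting it. As a rough illustration of that idea only (not the SageMaker shadow-test API itself), here is a minimal Python sketch; all names, such as production_model and candidate_model, are hypothetical placeholders:

```python
from typing import Callable, Iterable, Tuple

def offline_validation(
    requests: Iterable[Tuple[dict, float]],      # (payload, ground_truth) pairs
    production_model: Callable[[dict], float],   # hypothetical placeholder
    candidate_model: Callable[[dict], float],    # hypothetical placeholder
) -> dict:
    """Replay recorded inference requests through both model variants
    and compare a simple error metric (mean absolute error)."""
    prod_err = cand_err = 0.0
    n = 0
    for payload, truth in requests:
        prod_err += abs(production_model(payload) - truth)
        cand_err += abs(candidate_model(payload) - truth)
        n += 1
    if n == 0:
        raise ValueError("no historical requests to replay")
    return {
        "production_mae": prod_err / n,
        "candidate_mae": cand_err / n,
    }
```

As the post goes on to note, this kind of offline replay only covers scenarios present in the historical data, which is the gap that shadow testing against live traffic is meant to close.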
Source: New for Amazon SageMaker – Perform Shadow Tests to Compare Inference Performance Between ML Model Variants