SUNNYVALE, Calif. (PRWEB) November 13, 2018
ParallelM, the leader in MLOps (machine learning operationalization), today announced new features to its flagship solution, MCenter™, that support a wider variety of data science tools, add advanced ML Health functionality, and better align with AI-first companies and AI service providers.
“By partnering with our customers and understanding what they need to automate and scale MLOps in production, we have made significant advances in MCenter,” said Nisha Talagala, CTO, ParallelM. “With these new innovations in ML Health and support for AI-first companies, we continue to lead the MLOps revolution with the most robust platform on the market.”
MCenter delivers a unique approach to MLOps, addressing ML production issues head-on by automating ML-optimized continuous deployment and integration. MCenter ensures the quality and performance of live ML applications while empowering data science and operations teams with innovative visualizations to manage ML applications over time. With MCenter, business teams can mitigate risk, ensure compliance, and assess and optimize the ROI of their AI initiatives. By providing a single, unified software solution for the full ML production lifecycle, MCenter enables companies to move confidently into the critical phase of realizing and scaling ML business value.
The latest updates to MCenter deliver:
Enhanced Support for Data Science Languages and Tools: MCenter now includes new and enhanced integrations supporting more data science platforms and services, including:
- Programming Languages – R, Python, Java, and Scala
- Data Science Tools – Jupyter, DataRobot, H2O, Cloudera DSW, IBM DSX and more
- Deployment Engines – Spark, TensorFlow, PyTorch, Flink, and both Python and R deployed natively in Docker containers
Next Generation ML Health: Putting ML models into production is only the beginning. Once in production, models must be continuously monitored to make sure they produce quality predictions. MCenter's advanced ML Health capabilities ensure that models in production are operating as expected, and include:
- Automatic Data Asymmetry Detection – High levels of asymmetry between training and production data sets can lead to increased prediction errors. For example, if a machine learning/deep learning model was trained on data in which a feature contains mostly the categories ‘rainy’ and ‘sunny’, with only rare cases of ‘cloudy’, and that model then sees ‘cloudy’ all the time in production, accurate classification is unlikely.
- Automatic Data Deviation / Drift Detection – Over time, production data can drift apart from training data, reducing the accuracy of predictions. MCenter automatically detects data deviation for every feature so you know when retraining is needed.
- Production Watchdog Models – A model used in production should accurately predict outcomes across a variety of situations. However, for some data patterns, the model may perform poorly. An additional production “watchdog” model is trained to know when the primary production model is likely to perform poorly. When production data contains too many of these instances, then the primary model can be retrained and tuned to accommodate the new data profile.
- Canary Models – MCenter also supports using canary (control) pipelines that compare the sophisticated primary algorithm/model with a less sophisticated but established control algorithm.
- Automatic Configuration – MCenter automatically learns deviation thresholds and sets alerts for your models, so you don't need to establish and reset limits in the system.
- Diagnostic Intelligence Layer – With multiple ML Health techniques available, MCenter’s new diagnostic intelligence layer determines the combinations of ML Health indicators that highlight a real issue. This helps prevent false alarms that waste time and could reduce performance by taking models offline unnecessarily.
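To make the drift-detection idea above concrete: MCenter's internals are not public, but per-feature data deviation monitoring is commonly built on a two-sample statistic comparing the training baseline against a window of production data. The sketch below uses the Kolmogorov–Smirnov statistic with an illustrative threshold; the function names and the 0.2 cutoff are assumptions for demonstration, not MCenter's actual API or defaults.

```python
# Illustrative sketch of per-feature drift detection between a training
# baseline and a production window, using the two-sample KS statistic.
# Names and the threshold value are hypothetical, not MCenter's API.
import numpy as np

def ks_statistic(train, prod):
    """Two-sample KS statistic: the largest gap between the two empirical CDFs."""
    train, prod = np.sort(train), np.sort(prod)
    combined = np.concatenate([train, prod])
    cdf_train = np.searchsorted(train, combined, side="right") / len(train)
    cdf_prod = np.searchsorted(prod, combined, side="right") / len(prod)
    return float(np.max(np.abs(cdf_train - cdf_prod)))

def detect_drift(train_features, prod_features, threshold=0.2):
    """Flag each feature whose KS statistic exceeds the threshold."""
    return {name: ks_statistic(train_features[name], prod_features[name]) > threshold
            for name in train_features}

rng = np.random.default_rng(0)
train = {"temp": rng.normal(20, 5, 1000), "humidity": rng.normal(50, 10, 1000)}
prod = {"temp": rng.normal(20, 5, 500),       # same distribution: no drift expected
        "humidity": rng.normal(80, 10, 500)}  # shifted mean: drift expected
print(detect_drift(train, prod))
```

A production system would also need to pick windows, handle categorical features (e.g. with a chi-squared test, which would cover the ‘cloudy’ asymmetry example above), and tune thresholds per feature — the automatic-configuration and diagnostic-intelligence capabilities described in this release.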
MLApp Replication for AI-First Services:
- AI-first companies often maintain the same basic model for multiple clients. With limited data science resources, these companies often have to choose between supporting existing ML applications or building new ones. For companies running the same basic structure for multiple clients, MCenter provides the ability to create a core MLApp (Machine Learning Application) that is then replicated and deployed with a unique configuration per customer. With this structure, each related MLApp trains on its own customer's data, may have unique tuning parameters, can run on a different schedule, and produces a unique inference model. However, when the core MLApp is revised, the related MLApps are also updated with the latest code and optimizations. This ability to maintain multiple related MLApps allows ML service providers and AI-first companies to scale their operations and frees their data science resources to create new, innovative services.
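The core-app-plus-replicas pattern described above can be sketched as follows. This is a hypothetical illustration of the concept — the class, field, and function names are invented for this example and do not reflect MCenter's actual interfaces.

```python
# Hypothetical sketch of a core MLApp replicated per customer, where
# each replica keeps its own tuning and schedule but tracks the core's
# code version. All names here are illustrative, not MCenter's API.
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class MLApp:
    name: str
    code_version: str
    params: dict = field(default_factory=dict)
    schedule: str = "daily"

def replicate(core, customer, **overrides):
    """Derive a customer-specific MLApp from the core app."""
    return replace(core, name=f"{core.name}-{customer}", **overrides)

core = MLApp("churn-model", code_version="1.0", params={"lr": 0.01})
apps = {
    "acme": replicate(core, "acme", params={"lr": 0.05}, schedule="hourly"),
    "globex": replicate(core, "globex"),
}

# Revising the core propagates the new code version to every replica
# while each replica keeps its own customer-specific configuration.
core = replace(core, code_version="2.0")
apps = {c: replace(app, code_version=core.code_version) for c, app in apps.items()}
print(apps["acme"].code_version, apps["acme"].params)  # → 2.0 {'lr': 0.05}
```

The point of the pattern is the last two lines: one change to the core fans out to every customer deployment without touching any customer's tuning parameters or schedule.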
MCenter’s new features are available immediately to all current customers. For more information, visit http://www.parallelm.com.
ParallelM is the first and only company completely focused on delivering machine learning operationalization (MLOps) at scale. ParallelM’s breakthrough MCenter™ solution is built specifically to power the deployment, optimization, and governance of machine learning pipelines in production so that companies can scale machine learning across their business applications. ParallelM’s approach is that of a single, unified MLOps solution that embeds best practice processes in technology, enabling all ML stakeholders to unlock the business value of AI. Please visit http://www.parallelm.com or email us at email@example.com.
ParallelM and MCenter are trademarks of Parallel Machines, Inc. All other trademarks are the property of their respective registered owners. Trademark use is for identification only and does not imply sponsorship, affiliation, or endorsement.
MLOps (a compound of “machine learning” and “operationalization”) is the practice of operationalizing and managing the lifecycle of ML in production. MLOps establishes a culture and environment in which ML technologies can generate business benefits, optimizing the ML lifecycle to automate and scale ML initiatives and maximize the business return of ML in production. MLOps enables collaboration across diverse users (such as Data Scientists, Data Engineers, Business Analysts and ITOps) on ML operations and enables data-driven continuous optimization of ML operations’ impact or ROI (Return on Investment) to business applications. For more information, visit MLOps.org.