New MMC Snap-In Shows Reliability Is Always Crucial


Microsoft TechNet reports a new Microsoft Management Console (MMC) snap-in called Windows Reliability and Performance Monitor, which gives IT professionals tools to monitor and assess system performance and reliability. Targeted primarily at IT professionals, this release underscores that reliability remains a key issue, especially where servers are involved. The snap-in tracks changes to the system and compares them with changes in system stability, providing a graphical view of their relationship, as well as a graphical interface for customizing performance data collection.
Both hardware and software makers work hard to ensure reliability, with technologies such as distributed computing and failover, and with applications written to be as compatible as possible with hardware, operating systems and other applications. A primary obstacle to reliability, however, one that can go unnoticed by IT personnel and users alike, is disk file fragmentation. Fragmentation can cause common reliability issues such as system hangs and even crashes. When crucial files are split into tens, hundreds or sometimes thousands of fragments, retrieving a single file becomes a considerable tax on already-strained system resources. As many IT personnel know too well, too much strain will definitely make reliability questionable.

The escalating sizes of today's disks and files have caused file fragmentation to occur at much higher levels than on past systems. Couple that with the sharply increased demand on servers from the Web, crucial CRM applications, and modern databases, and without a solution one has a recipe for disaster. On today's servers, a fragmentation solution is mandatory.

Beyond simply having such a solution, however, attention should be paid to the type of defragmentation solution as well, especially with regard to site volume and requirements. For most sites, manual defragmentation--the case-by-case launching of a defragmenter when desired or needed--is no longer an option, given fragmentation levels and the time required for a defragmenter to run. For many years, defragmenters have offered scheduling options that allow specific times for defragmentation to be set, so that defragmentation occurs frequently enough to keep fragmentation under control and at times when its impact on system resources isn't an issue.
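As a rough illustration of the scheduled approach described above, a Windows administrator might combine the built-in defrag.exe utility with Task Scheduler. This is a minimal sketch: the task name "WeeklyDefragC" and the Sunday 2:00 AM slot are illustrative choices, not recommendations from this article.

```shell
:: Analyze current fragmentation on C: (run from an elevated Command Prompt;
:: defrag.exe ships with Windows).
defrag C: /A /V

:: Register a weekly off-peak defragmentation pass via Task Scheduler.
:: /U prints progress, /V gives verbose output; the schedule is an example only.
schtasks /Create /TN "WeeklyDefragC" /TR "defrag.exe C: /U /V" /SC WEEKLY /D SUN /ST 02:00 /RU SYSTEM
```

A fixed schedule like this keeps fragmentation from accumulating unchecked, but, as the next paragraph notes, it still requires IT time to tune and may not keep pace on heavily loaded servers.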

In the last few years, however, even scheduled defragmentation has begun to look outdated. Beyond the extra IT time it takes to set schedules and ensure they are optimal, scheduled defragmenting is proving inadequate to keep up with the rapid increase in fragmentation rates. Hence, defragmentation technology that works constantly in the background, with minimal impact on system resources, has begun to appear.

The reliability of today's systems, especially servers, is obviously vital. Tools such as the Windows Reliability and Performance Monitor will continue to be used to check and maintain system reliability. But as a standard part of maintaining system reliability, defragmentation should always be performed.

###


Contact Author

B Boyers
Boyers Marketing
818-637-2625