
Ransomware Protection Tips


You hope that your systems never get attacked by Ransomware, but in case they are, you want to be prepared. One of the best ways to recover from such a malicious attack is to keep good, recent backups of your systems. But even with that, you can only recover back to the last known good backup. What about the files worked on since that last good backup? To fully recover from a Ransomware attack, you want those files recovered too. This is where Undelete® instant file recovery software can help, when set up properly.

Undelete can provide a further level of recovery with its versioning and deleted file protection 

Undelete’s versioning capability can keep copies of files worked on since that last backup, plus any files created and deleted since then. This can help you recover new or updated files after the last backup completed. The latter feature of capturing deleted files can be extremely beneficial because some variants of Ransomware copy the original files to an encrypted form and then delete the originals. In these cases, many of the deleted original files may be in the Undelete Recovery Bin and available for recovery.

But what about protecting the Undelete Recovery Bin from the Ransomware attack?

This is where the Common Recovery Bin feature can help.  By default, Undelete creates a Recovery Bin folder on each volume that it is protecting. All the versioned and deleted files from each volume are stored in the Recovery Bin folder on the respective volumes. With the Common Recovery Bin feature, you can select a single location on a different volume that will contain all of the versioned and deleted files from all of your protected volumes.  For example, you may want to set up a dedicated X: volume that contains the Recovery Bin files from all of the protected volumes. So, even if your main system volumes get affected by Ransomware, these other volumes may remain safe.  This is not a fail-safe protection against Ransomware, but just another deterrent against these Recovery Bin files getting infected. 

 

 

Although you may have purchased or tried Undelete for its file recovery features for accidental user file deletions from local or network shares, it can also provide added recovery benefits after malicious attacks.

If you need additional Undelete licenses, you can contact your account manager or buy instantly online.

You can also download a free 30-day trial of Undelete.

Learn more about Undelete from this series of videos.


How to Recover Deleted Files from Network Shares


You may have discovered, too late, that while you can recover some deleted files from the Windows Recycle Bin on local machines, you cannot recover files deleted (accidentally or otherwise) from network drive shared folders. If you delete a file from a network share, it is gone. If you look in the Recycle Bin, it won’t be there.

This happens because Windows is designed so that deleted files can be captured by the Windows Recycle Bin on local drives only. If a user deletes a file on a server from a network shared folder, it isn’t being deleted from the local machine, so the Recycle Bin does not capture it. This is also true of files deleted from attached or removable drives, and files deleted from applications or the Command Prompt. Only files deleted from File Explorer on a machine’s local drive will be saved by the Recycle Bin.

With some types of software, you might be able to recover an earlier saved version of a file deleted from a network shared folder, which would give you the version prior to the deletion. Failing this, the only other way to recover a file deleted from a network share (without a third-party solution—see below) is to have your system administrator retrieve an earlier saved version of the file from the most recent backup. This will only work if:

 

a) A version of the file was actually backed up

b) You can recall the file name so that the system administrator can find it

c) You can recall with some accuracy the time and date when the file was saved. 

 

This method is, of course, extremely time consuming for the sys admin—and for you, too, if you have to wait. 

Even if the previous version can be retrieved, any work done on the file since the last save is lost forever. 

 

Problem Solved: Undelete

Fortunately, there is a very easy and cost-effective solution to this perpetual issue: Undelete® Instant Data Recovery software from Condusiv. 

1. To permanently solve this problem site-wide, download and install Undelete Server, which is extremely fast and simple, and doesn’t require a reboot to complete the installation (something you really don’t want to have to do on a server running databases or applications requiring constant uptime). 

 

 

2. Following installation, the first thing you’ll notice is that the Windows Recycle Bin has been replaced by the Undelete Recovery Bin. The Recovery Bin will not only capture files deleted from network shares, but also files overwritten on the user’s drive, files deleted between backups, and files deleted from the Command Prompt. 

3. Test it for yourself. Create a test file within a network drive shared folder and delete it. You’ll see that your file has, as you would expect, disappeared from the server as well. (A quick command-line way to do this is sketched after these steps.)

4. Open the Undelete Recovery Bin. You’ll be able to easily navigate to the shared folder from which you deleted the file, and there you’ll find it again. (If you are not an administrator, see Undelete Client below.)

 

 

5. You can then select that file and recover it back to its original location, or even to a new location. 

 

 

 

 6. You’re done! That’s how easy it is.
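
If you would rather script the test in step 3 than click through File Explorer, here is a minimal PowerShell sketch. The UNC path is only a placeholder for one of your own test shares:

# Create a small test file on a network share, confirm it exists, then delete it.
# \\FILESERVER\Shared is a hypothetical path; substitute a share you actually use.
$testFile = '\\FILESERVER\Shared\undelete-test.txt'
Set-Content -Path $testFile -Value "Undelete recovery test $(Get-Date)"
Test-Path $testFile        # True
Remove-Item $testFile
Test-Path $testFile        # False - and the file will not appear in the local Recycle Bin

You can then open the Undelete Recovery Bin on the server (or the Undelete Client, as described below) and look for undelete-test.txt under that share.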

 

Undelete Client

The above example demonstrates a user opening Undelete Server on the respective server to recover the file. Users, however, may not have access to the server, but a system administrator can certainly log on and open Undelete Server to recover the file. 

However, once Undelete is installed on a user’s system, the user can open Undelete against the remote network share, follow the steps above, and view and recover their own files.

 

Buy Undelete Instant Data Recovery now, and always be able to recover deleted files from network shares. 

 

Purchase Online Now https://www.condusiv.com/purchase/Undelete/

 

Request a Volume Quote https://learn.condusiv.com/Volume-Licensing-Undelete.html

 

Download a Free trial https://learn.condusiv.com/LP-Trialware-Undelete.html

What Condusiv’s Diskeeper Does for Me


I'm a person who uses what he has. For example, my friend Roger purchased a fancy new keyboard, but only uses it on "special occasions" because he wants to keep the hardware in pristine condition. That isn't me--my stuff wears out because I rely on it and use it continuously.

To this point, the storage on my Windows 10 workstation computer takes a heavy beating because I read and write data to my hard drives every single day. I have quite an assortment of fixed drives on this machine:

•  mechanical hard disk drive (HDD)

•  "no moving parts" solid state drive (SSD)

•  hybrid SSD/HDD drive

Today I'd like to share with you some ways that Condusiv’s Diskeeper helps me stay productive. Trust me--I'm no salesperson. Condusiv isn't paying me to write this article. My goal is to share my experience with you so you have something to think about in terms of disk optimization options for your server and workstation storage.

Diskeeper® or SSDkeeper®?

I've used Diskeeper on my servers and workstations since 2000. How time flies! A few years ago it confused me when Condusiv released SSDkeeper, their SSD optimization tool that works by optimizing your data as it's written to disk.

Specifically, my confusion lay in the fact that you can't have Diskeeper and SSDkeeper installed on the same machine simultaneously. As you saw, I almost always have a mixture of HDD and SSD drives. What am I losing by installing either Diskeeper or SSDkeeper, but not both?

You lose nothing, because Diskeeper and SSDkeeper share most of the same features. Diskeeper, like SSDkeeper, can optimize solid-state disks using IntelliMemory Caching and IntelliWrite, and SSDkeeper, like Diskeeper, can optimize magnetic disks using Instant Defrag. Both products can automatically determine the storage type and apply the optimal technology.

Thus, your decision of whether to purchase Diskeeper or SSDkeeper is based on which technology the majority of your disks use, either HDD or SSD.

Allow me to explain what those three product features mean in practice:

•  IntelliMemory®: Uses unallocated system random access memory (RAM) for disk read caching

•  IntelliWrite®: Prevents fragmentation in the first place by writing data sequentially to your hard drive as it's created

•  Instant Defrag™: Uses a Windows service to perform "just in time" disk defragmentation

In Diskeeper, click Settings > System > Basic to verify you're taking advantage of these features. I show you the interface in Figure 1.

 

Figure 1. Diskeeper settings.

What about external drives?

In Condusiv's Top FAQs document you'll note that Diskeeper no longer supports external drives. Their justification for this decision is that their customers generally do not use external USB drives for high-performance, I/O-intensive applications.

If you want to run optimization on external drives, you can do that graphically with the Optimize Drives Windows 10 utility, or you can run defrag.exe from an elevated command prompt.

For example, here I am running a fragmentation analysis on my H: volume, an external SATA HDD:

PS C:\users\tim> defrag H: /A

Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.

Invoking analysis on TWARNER1 (H:)...

The operation completed successfully.

Post Defragmentation Report:

        Volume Information:
                Volume size                 = 1.81 TB
                Free space                  = 1.44 TB
                Total fragmented space      = 0%
                Largest free space size     = 1.43 TB

        Note: File fragments larger than 64MB are not included in the fragmentation statistics.

        You do not need to defragment this volume.

PS C:\users\tim>

Let's look at the numbers!

All Condusiv products make it simple to perform benchmark analyses and run progress reports, and Diskeeper is no exception to this rule. Look at Figure 2--since I rebuilt my Windows 10 workstation and installed Diskeeper in July 2018, I've saved over 20 days of storage I/O time!

Figure 2. Diskeeper dashboard.

Those impressive I/O numbers don't strain credulity when you remember that Diskeeper aggregates I/O values across all my fixed drives, not only one. This time saving is the chief benefit Diskeeper gives me as a working IT professional. The tool gives me back seconds that otherwise I'd spend waiting on disk operations to complete; I then can use that time for productive work instead.

Even more to the point, Diskeeper does this work for me in the background, without my having to remember to run or even schedule defragmentation and optimization jobs. I'm a huge fan of Diskeeper, and I hope you will be, too.

Recommendation

Condusiv offers a free 30-day trial that you can download to see how much time it can save you:

Diskeeper 30-day trial

SSDkeeper 30-day trial

Note: If you have a virtual environment, you can download a 30-day trial of Condusiv’s V-locity (you can also see my review of V-locity 7).

 

Timothy Warner is a Microsoft Most Valuable Professional (MVP) in Cloud and Datacenter Management who is based in Nashville, TN. His professional specialties include Microsoft Azure, cross-platform PowerShell, and all things Windows Server-related. You can reach Tim via Twitter (@TechTrainerTim), LinkedIn or his website, techtrainertim.com.

  

Causes and Solutions for Latency


Sometimes the slowdown of a Windows server occurs because the device or its operating system is outdated. Other times, the slowdown is due to physical constraints on the retrieval, processing, or transmitting of data. There are other causes as well, which we will cover. In any case, the delay between when a command is issued and a response is received is referred to as "latency."

Latency is a measure of time. For example, the latency of a command might be 0.02 seconds. To humans, this seems extraordinarily fast. However, computer processors can execute billions of instructions per second, so a 0.02-second wait leaves room for tens of millions of instructions that never get executed. Repeated across the many I/O operations a busy system performs, these small delays add up to visible slowdowns in the operation of a computer or server.

To figure out how to improve latency, you must first identify its source. There are many possible sources of latency and, for each one, there are fixes. Here are two possible causes of latency, along with a brief explanation of how to address each. The focus here is I/O latency: the time a process spends waiting for an I/O to complete before it can work on that I/O's data, which is wasted processing power.

Data Fragments

Logical data fragments occur when files are written, deleted, and rewritten to a hard drive or solid-state drive.

When files are deleted from a drive, the data actually still exists on the drive. However, the logical address the Windows file system used for those files is freed up for reuse. This means that "deleted" files remain on the logical drive until another file is written over them by reusing the address. (This also explains why it is possible to recover lost files.)

When an address is reused, the likelihood that the new file is exactly the same length as the "deleted" file is remote. As a result, little chunks or fragments of space left over from the "deleted" file remain on the logical drive. As a logical drive fills up, new files are sometimes broken up to fit into the available segments. At its worst, a fragmented logical drive contains both old fragments left over from deleted files (free space fragments) and new fragments that were intentionally created (data file fragments).

Logical data fragments can be a significant source of latency in a computer or server. Storing to, and retrieving from, a fragmented logical drive introduces additional steps in searching for and reassembling files around the fragments. For example, rather than reading a file in one or two I/Os, fragmentation can require hundreds, even thousands of I/Os to read or write that same data.

One way to improve latency caused by logical data fragments is to defragment the logical drive by collecting the fragments and making them contiguous. The main disadvantages of defragmenting are that it must be repeated periodically, because the logical drive will inevitably fragment again, and that defragmenting SSDs can cause them to wear out prematurely.
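
If you want to see how fragmented a volume currently is before deciding on a fix, Windows includes the Optimize-Volume cmdlet. Here is a minimal sketch; the drive letter is only an example, and the commands need an elevated PowerShell session:

# Analyze fragmentation on volume D: without changing anything
Optimize-Volume -DriveLetter D -Analyze -Verbose

# Traditional defragmentation of an HDD volume (as noted above, routine defrag is not advisable for SSDs)
Optimize-Volume -DriveLetter D -Defrag -Verbose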

A better way to improve latency from disk fragments is to prevent the logical disk from becoming fragmented. Diskeeper® 18 manages writes so that large, contiguous segments are kept together from the very start, thereby preventing the fragments from developing in the first place.

Limited Resources

No matter how "fast" the components of a computer are, they are still finite and tasks must be scheduled and performed in order. Certain tasks must be put off while more urgent tasks are executed. Although the latency in scheduling is often so short that it is unnoticeable, there will be times when limited resources cause enough of a delay that it hampers the computer or server.

For example, two specifications that are commonly used to define the speed of a computer are processor clock speed and instructions per cycle. Although these numbers climb steadily as technology advances, there will always be situations where the processor has too many tasks to execute and must delay some of them to get them all done.

Similarly, data buses and RAM have a particular speed. This speed limits the frequency with which data can be moved to the processor. These kinds of Input/output performance delays can reduce a system’s capacity by more than 50%.

One way to address latency is a method used by Diskeeper® 18: idle, available DRAM is used to cache hot reads. By caching, it eliminates having to travel all the way to the storage infrastructure to read the data; and remember that DRAM can be 10x-15x faster than SSDs and far faster still than HDDs. This allows faster retrieval of data; in fact, Windows systems can run faster than when new.

Reducing latency is mostly a matter of identifying the source of latencies and addressing them. By being proactive and preventing fragmentation before it happens and by caching hot reads using idle & available DRAM, Diskeeper® 18 makes Windows computers faster and more reliable.

 

How To Get The Most Out Of Your Flash Storage Or Move To Cloud


You just went out and upgraded your storage to all-flash.  Or, maybe you have moved your systems to the cloud where you can choose the SLA to get the performance you want.  We can provide you with a secret weapon that will make you continue to look like a hero and get the real performance you made these choices for.


Let’s start with why you made those choices in the first place.  Why did you make the change?  Why not just upgrade the aging storage to a new-gen HDD or hybrid storage subsystem?  After all, if you’re like most of us, you’re still experiencing explosive growth in data, and HDDs continue to be more cost-effective for whatever data requirements you’re going to need in the future.

 

If you went to all-flash, perhaps it was the decreasing cost that made it more approachable from a budgetary point of view and the obvious gain in speed made it easy to justify.

 

If it was a move to the cloud, there may have been many reasons including:

   •  Not having to maintain the infrastructure anymore

   •  More flexibility to quickly add additional resources as needed

   •  Ability to pay for the SLA you need to match application needs to end user performance

Good choices.  So, what can Diskeeper® and V-locity® do to help make these even better choices to provide the expected performance results at peak times when needed most?

 

Let’s start with a brief conversation about I/O bottlenecks.

 

If you have an All-Flash Array, you still have a network connection between your system and your storage array.  If you have local flash storage, system memory is still faster, but your data size requirements make it a limited resource. 

 

If you’re on the cloud, you’re still competing for resources.  And, at peak times, you’ll see slowdowns due to resource contention.  Plus, you will experience issues because of file system and operating system overhead.

 

File fragmentation significantly increases the number of I/Os that have to be requested for your applications to process the data they need.  Free space fragmentation adds overhead to allocating file space and makes file fragmentation far more likely.

 

Then there are all the I/Os that Windows creates that are not directly related to your application’s data access.  And then you have utilities for anti-malware, data recovery, and so on.  And trust me, there are LOTS of those.

 

At Condusiv, we’ve watched the dramatic changes in storage and data for a long time.  The one constant we have seen is that your needs will always accelerate past the current generation of technologies you use.  We also handle the issues that aren’t handled by the next generation of hardware.  Let’s take just a minute and talk about that.

 

What about all the I/O overhead created in the background by Windows or your anti-malware and other system utility software packages?  What about the I/Os that your application doesn’t bother to optimize because it isn’t the primary data being accessed?  Those I/Os account for a LOT of I/O bandwidth.  We refer to those as “noisy” I/Os.  They are necessary, but not the data your application is actually trying to process.  And, what about all the I/Os to the storage subsystem from other compute nodes?  We refer to that problem as the I/O Blender Effect.

 

 

Our RAM caching technologies are highly optimized to use a small amount of RAM to eliminate the maximum amount of I/O overhead.  They do this dynamically, so that when you need RAM the most, we free it up for your needs.  Then, when RAM is available, we use it to remove the I/Os causing the most overhead.  A small amount of free RAM goes a long way towards reducing the I/O overhead problem.  That’s because our caching algorithms look at how to eliminate the most I/O overhead effectively.  We don’t use LIFO or FIFO algorithms hoping to eliminate I/Os.  Our algorithm uses empirical data, in real time, to guarantee maximum I/O overhead elimination while using minimal resources.

 

Defragmenting every fragmented file is not reasonable given today’s data explosion.  Plus, you didn’t spend your money so our software could consume your resources just making the drive layout look pretty.  We knew this long before you ever did.  As a result, we created technologies to prevent fragmentation in the first place.  And we created technologies to empirically locate just those files that are causing extra overhead due to fragmentation, so we can address only those files and get the most bang for the buck in terms of I/O density.

 

Between our caching and file optimization technologies, we will make sure you keep getting the performance you hoped for when you need it the most.  And, of course, you will continue to be the superstar to the end users and your boss.  I call that a Win-Win. 😊

 

Finally, we continue to look in our crystal ball for the next set of I/O performance issues, the ones others aren’t thinking about yet, before they appear in the first place.  You can rest assured we will have solutions for those problems long before you ever experience them.

 

##

 

Additional and related resources:

 

Windows is still Windows Whether in the Cloud, on Hyperconverged or All-flash

Why Faster Storage May NOT Fix It

How to make NVMe storage even faster

Trial Downloads

 

Top 10 Webinar Questions – Our Experts Get Technical


As we enter the new year and reflect on the 25 live webinars that we held in 2019, we are thrilled with the level of interaction and thought we’d take a look back at some of the great questions asked during the lively Q&A sessions. Here are the top questions and the responses that our technical experts gave.

 

We run a Windows VM on Microsoft Azure; is your product still applicable?

Yes. Whether the Windows system is physical or virtual, it still runs into the I/O tax and the I/O blender effect, both of which degrade system performance.  Whether the system is on premises or in the cloud, V-locity® can optimize and improve performance.

 

If a server is dedicated to running multiple SQL jobs for different applications, would you recommend installing V-locity?

Yes, we would definitely recommend using V-locity. However, the software is not specific to SQL instances, as it looks to improve the I/O performance on any system. SQL just happens to be a sweet spot because of how I/O intensive it is.

 

Will V-locity/Diskeeper® help with the performance of my backup jobs?

We have a lot of customers that buy the software to increase their backup performance because their backup windows are going past the time they have allotted to do the backup. We’ve had some great success stories of customers that have reduced their backup windows by putting our software on their system.

 

Does the software work in physical environments?

Yes. Although we are showing how the software provides benefits in a virtual environment, the same performance gains can be had on physical systems. The same I/O tax and blender effect that degrade performance on virtual systems can also happen on physical systems. The I/O tax occurs on any Windows system when nice, sequential I/O is broken up into less efficient, smaller, random I/O, which also applies to physical workstation environments. The blender effect, which we see when all of those small, random I/Os from multiple VMs have to get sorted by the hypervisor, can occur in physical environments too; for example, when multiple physical systems are reading from and writing to different LUNs on the same SAN.

 

What about the safety of this caching? If the system crashes, how safe is my data?

The software uses read-only caching, as data integrity is our #1 priority when we develop these products. With read-only caching, the data that’s in our cache is already in your storage. So, if the system unexpectedly goes down (e.g., a power outage), it’s okay, because the data in cache is already on your storage and completely safe.

 

How does your read cache differ from SQL that has its own data cache?

SQL is not too smart or efficient with how it uses your valuable available memory. It tries to load as much of its databases as it can into the available memory, even though some of the databases, or parts of those databases, aren’t even being accessed. Most of the time, your databases are much larger than the amount of memory you have, so it can never fit everything. Our software is smarter in that it can determine the best blocks of data to optimize in order to get the best performance gains. Additionally, the software will also be caching other noisy I/Os from the system, which can improve performance on the SQL server.

 

In a Virtual environment, does the software get installed on the Host or the VMs?

The software gets installed on the actual VMs that are running Windows, because that’s where the I/Os are getting created by the applications and the best place to start optimizing. Now, that doesn’t necessarily mean that it has to get installed on all of the VMs on a host. You can put it just on the VMs that are getting hit the most with I/O activity, but we’ve seen the best performance gains if it gets installed on all of the VMs on that host because if you only optimize one VM, you still have the other VMs causing performance degradation issues on that same network. By putting the software on all of them, you’ll get optimal performance all around.

 

Is your product needed if I have SSDs as my storage back-end?

Our patented I/O reduction solutions are very relevant in an SSD environment. By reducing random write I/Os to back-end SSDs, we also help mitigate and reduce write amplification issues. We keep SSDs running at “like new” performance levels. And, although SSDs are much faster than HDDs, the DRAM used in the product’s intelligent caching feature is 10x-15x faster than SSDs. We have many published customer use cases showing the benefits of our products on SSD-based systems. Many of our customers have experienced 50, 100, even 300% performance gains in an all-flash/SSD environment!

 

Do we need to increase our RAM capacity to utilize your software?

That is one of the unique Set-It-and-Forget-It features of this product. The software will just use the available memory that’s not being used at the time and will give it back if the system or user applications need it. If there’s no available memory on the system, you just won’t be able to take advantage of the caching. So, if there’s not enough available RAM, we do recommend adding some to take advantage of the caching, but of course you’re always going to get the advantage of all the other technology if you can’t add RAM. Best practice is to reserve 4-8GB at a minimum.
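
If you want to check how much memory is actually sitting idle on a server before deciding whether to add more, the stock Windows memory counter gives a quick answer. A minimal PowerShell sketch (the sampling interval and count are arbitrary examples):

# Sample available physical memory (in MB) once every 5 seconds for a minute
Get-Counter -Counter '\Memory\Available MBytes' -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { '{0:u}  {1:N0} MB available' -f $_.Timestamp, $_.CounterSamples[0].CookedValue }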

 

What teams can benefit most from the software? The SQL Server Team/Network Team/Applications Development Team?

The software can really benefit everyone. SQL Servers are usually very I/O intensive, so performance can be improved because we’re reducing I/O in the environment, but any system or application (like a File Server or Exchange Server) that is I/O intensive will benefit. The network team can benefit from it because it decreases the traffic that has to go through the network to storage, so it increases bandwidth for others. Because the software also improves and reduces I/O across all Microsoft applications, it really can benefit everyone in the environment.

 

There you have it – our top 10 questions asked during our informative webinars! Have more questions? Check out our FAQs, ask us in the comments below or send an email to info@condusiv.com.

6 Best Practices to Improve SQL Query Performance


 

MS SQL Server is the world’s leading RDBMS and has plentiful benefits and features that empower its efficient operation. As with any such robust platform, however—especially one which has matured as SQL Server has—best practices have evolved that allow for its best performance.

For any company utilizing it, Microsoft SQL Server is central to the company’s management and storage of information. In business, time is money, so any company that relies on information to function (and in this digital age that would be pretty much all of them) needs access to that information as rapidly as possible. Since that information is often obtained from databases through queries, optimizing SQL query performance is vital.

As 7 out of 10 Condusiv customers come to us because they are experiencing SQL performance issues and user complaints, we have amassed a great deal of experience on this topic and would like to share it. We have done extensive work in eliminating I/O inefficiencies and streamlining I/O for optimum performance. It is especially important in SQL to reduce the number of random and “noisy” I/Os, as they can be quite problematic. We will cover this as well as some additional best practices. Some of these practices may be more time-consuming, and may even require a SQL consultant, while some are easy to implement.

SQL Server, and frankly all relational databases, are all high I/O utilization systems. They’re going to do a lot of workload against the storage array. Understanding their I/O patterns are important, and more important in virtual environments.— Joey D’Antoni, Senior Architect and SQL Server MVP

Here are 6 best practices for the improvement of SQL query performance.

1. Tune queries

A great feature of the SQL language is that it is fairly easy to learn and to use in creating commands. Not all database functions are efficient, however. Two queries, while they might appear similar, could vary when it comes to execution time. The difference could be the way they are structured; this is a very involved subject and open to some debate. It’s best to engage a SQL consultant or expert and allow them to assist you with structuring your queries.

Aside from the query structure, there are some great guidelines to follow in defining business requirements before beginning.

   •  Identify relevant stakeholders

   •  Focus on business outcomes

   •  Ask the right questions to develop good query requirements

   •  Create very specific requirements and confirm them with stakeholders

2. Add memory

Adding memory will nearly always assist in SQL Server query performance, as SQL Server uses memory in several ways. These include:

   •  the buffer cache

   •  the plan cache, where query plans are stored for re-use

   •  the buffer pool, in which recently written-to pages are stored

   •  sorting and matching data, which all takes place in memory

Some queries require lots of memory for joins, sorts, and other operations. All of these operations require memory, and the more data you aggregate and query, the more memory each query may require.

[Tips for current Condusiv V-locity® users: (1) Provision an additional 4-16GB of memory to the SQL Server if you have additional memory to give. (2) Cap MS-SQL memory usage, leaving the additional memory for the OS and our software. (Note: Condusiv software will leverage whatever is unused by the OS.) (3) If there is no additional memory to add, cap SQL memory usage, leaving 8GB for the OS and our software. (Note: This may not achieve 2X gains, but will likely boost performance 30-50%, as SQL is not always efficient with its memory usage.)]
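
For reference, capping SQL Server memory as suggested in tips (2) and (3) above is done through the instance-level 'max server memory' setting. A minimal sketch using the SqlServer PowerShell module follows; the instance name and the 24576 MB figure are placeholders (for example, leaving roughly 8GB free on a 32GB server):

# Cap SQL Server's memory so roughly 8GB stays free for the OS and other software (example numbers)
$query = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance 'MYSQLSERVER' -Query $query   # requires the SqlServer module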

3. Perform index maintenance

Indexes are a key resource for SQL Server database performance. The downside, however, is that database indexes degrade over time.

Part of this performance degradation comes about through something that many system administrators will be familiar with: fragmentation. Fragmentation on a storage drive means data stored non-contiguously, so that the system has to search through thousands of fragments, meaning extra I/Os, to retrieve data. It is a similar situation with a database index.

There are two types of database index fragmentation:

   •  Internal fragmentation, which occurs when more than one data page is created, neither of which is full. Performance is affected because SQL Server must cache two full pages including empty yet allocated space.

   •  External fragmentation, which means pages that are out of order.

When an index is created, all pages are sequential, and rows are sequential across the pages. But as data is manipulated and added, pages are split, new pages are added, and tables become fragmented. This ultimately results in index fragmentation.

There are numerous measures to take in restoring an index so that all data is sequential again. One is to rebuild the index, which will result in a brand-new SQL index. Another is to reorganize the index, which will fix the physical order and compact pages.

There are other measures you can take as well, such as finding and removing unused indexes, detecting and creating missing indexes, and rebuilding or reorganizing indexes weekly.

It is recommended you do not perform such measures unless you are a DBA and/or have a thorough understanding of SQL Server.
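
For readers who are comfortable doing so, index fragmentation can be read from the sys.dm_db_index_physical_stats DMV, and a rebuild or reorganize is a single ALTER INDEX statement. A minimal sketch, again via the SqlServer PowerShell module; the database, table, and index names are placeholders:

# Report average fragmentation for every index in a database (placeholder name MyDatabase)
$check = @"
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDatabase'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
"@
Invoke-Sqlcmd -ServerInstance 'MYSQLSERVER' -Database 'MyDatabase' -Query $check

# Rebuild one heavily fragmented index (reorganize is the lighter-weight alternative)
Invoke-Sqlcmd -ServerInstance 'MYSQLSERVER' -Database 'MyDatabase' -Query "ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;"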

4. Add extra spindles or flash drives

Like the increase of memory, increasing storage capacity can be beneficial.

Adding an SSD, the most expensive option, can provide the most benefit, as there are no moving parts. The less expensive option is to add spindles. Both of these options can help decrease latency, but neither gets rid of the extra I/Os occurring due to fragmentation. Neither really solves the root cause of I/O inefficiencies.

5. Optimize the I/O subsystem 

Optimizing the I/O subsystem is highly important in optimizing SQL Server performance. When configuring a new server, or when adding or modifying the disk configuration of an existing system, determining the capacity of the I/O subsystem before deploying SQL Server is good practice.

There are three primary metrics that are most important when it comes to measuring I/O subsystem performance:

   •  Latency, which is the time it takes an I/O to complete.

   •  I/O operations per second, which is directly related to latency.

   •  Sequential throughput, which is the rate at which you can transfer data.

You can utilize an I/O stress tool to validate performance and ensure that the system is tuned optimally for SQL Server before deployment. This will help identify hardware or I/O configuration-related issues. One such tool is Microsoft DiskSpd, which provides the functionality needed to generate a wide variety of disk request patterns. These can be very helpful in the diagnosis and analysis of I/O performance issues.

You can download DiskSpd.exe here.
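
As a rough illustration, a DiskSpd run against a dedicated test file might look like the following. The parameter values (duration, block size, write percentage, thread count, outstanding I/Os, file size, and path) are only example numbers to adapt to your own workload:

# 60-second test: 8K blocks, random access, 30% writes, 4 threads, 4 outstanding I/Os per thread,
# caching disabled (-Sh), latency statistics captured (-L), against a 10GB test file
diskspd.exe -d60 -b8K -r -w30 -t4 -o4 -Sh -L -c10G D:\iotest.dat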

Another tool is Condusiv’s I/O Assessment Tool for identifying which systems suffer I/O issues and which systems do not. It identifies and ranks systems with the most I/O issues and displays what those issues are across 11 different key performance metrics by identifying performance deviations when workload is the heaviest.

You can download Condusiv’s I/O Assessment Tool here

6. Use V-locity I/O reduction software

Reduce the number of I/Os that you are doing. Because remember, the fastest read from disk you can do is one you don’t do at all. So, if you don’t have to do a read, that’s all the better.—Joey D’Antoni, Senior Architect and SQL Server MVP

Reducing and streamlining small, random, fractured I/O will speed up slow SQL queries, reports and missed SLAs. V-locity makes this easy to solve. 

Many companies have utilized virtualization to greatly increase server efficiency for SQL Server. While it increases efficiency, virtualization on Windows systems also has a downside. Virtualization itself adds complexity to the data path by mixing and randomizing I/O streams—something known as the “I/O blender effect.” On top of that, when Windows is abstracted from the physical layer, it additionally utilizes very small random reads and writes, which are less efficient than larger contiguous reads and writes. SQL Server performance is penalized not once, but twice. The net effect is I/O characteristics more fractured and random than they need to be.

The result is that typically systems process workloads about 50 percent slower than they should be, simply because a great deal more I/O is required.

While hardware can help performance problems as covered above, it is only a temporary fix as the cause of Windows I/O inefficiencies is not being addressed. Many sites have discovered that V-locity I/O reduction software, employed on any Windows server (virtual or physical) is a quicker and far more cost-effective solution. V-locity replaces tiny writes with large, clean, contiguous writes so that more payload is delivered with every I/O operation. I/O to storage is further reduced by establishing a tier 0 caching strategy which automatically serves hot reads from idle, otherwise unused memory. The software adjusts itself, moment to moment, to only use unused memory.

The use of V-locity can improve performance by 50 percent or more. Many sites see twice as much improvement or more, depending on the amount of DRAM available. Condusiv Technologies, developer of V-locity, actually provides a money-back guarantee that V-locity will solve the toughest application performance challenges on I/O intensive systems such as SQL Server.

You can download a free 30-day trial of V-locity I/O reduction software here. 


I/Os Are Not Created Equal – Random I/O versus Sequential I/O


To demonstrate the performance difference of I/O patterns, put yourself in a Veterinarian’s office where all the data is still stored on paper in file cabinets. For a single animal (billing, payments, medication, visits, procedures…), it is all stored in different folders and placed in different cabinets according to specific categories, like Billing and Payments.  To get all the data for that one animal, you may have to retrieve 10 different folders from 10 different cabinets. Wouldn’t it be easier if all that data was in a single file so you can retrieve it in one single step? This is basically the difference between Random I/O and Sequential I/O.

 

Accessing data randomly is much slower and less efficient than accessing it sequentially.  Simply put, it is faster to write or read the same data with a single sequential I/O rather than multiple, say 25, smaller random I/Os. For one, the operating system must process all those extra I/Os rather than just the single one, a substantial overhead.  Then the storage device also has to process all those multiple I/Os. With Hard Disk Drives (HDDs), the penalty is worse because the extra disk head movement to gather the data from all those random I/Os is very time-consuming. With Solid State Drives (SSDs), there is no disk head movement penalty, just the penalty of the storage device having to process multiple I/Os rather than a single one. In fact, storage manufacturers usually provide two benchmarks for their devices – Random I/O and Sequential I/O performance. You will notice that the Sequential I/Os always outperform the Random I/Os. This is true for both HDDs and SSDs.

 

Sequential I/O always outperforms Random I/O on hard disk drives or SSDs.
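
You can get a rough feel for this difference on your own hardware by timing sequential reads against random reads of the same file. The sketch below is illustrative only: the file path, block size, and read count are placeholders, and the Windows file cache will soften the gap on repeated runs:

# Time 2,000 sequential 64KB reads versus 2,000 random 64KB reads of the same large file
$path      = 'D:\iotest.dat'            # any large existing file works
$blockSize = 64KB
$reads     = 2000
$buffer    = New-Object byte[] $blockSize

$fs  = [System.IO.File]::OpenRead($path)
$seq = Measure-Command {
    for ($i = 0; $i -lt $reads; $i++) { [void]$fs.Read($buffer, 0, $blockSize) }
}
$fs.Close()

$fs   = [System.IO.File]::OpenRead($path)
$rng  = New-Object System.Random
$max  = [int](($fs.Length - $blockSize) / $blockSize)
$rand = Measure-Command {
    for ($i = 0; $i -lt $reads; $i++) {
        $fs.Position = [long]$rng.Next(0, $max) * $blockSize
        [void]$fs.Read($buffer, 0, $blockSize)
    }
}
$fs.Close()

"Sequential: {0:N0} ms    Random: {1:N0} ms" -f $seq.TotalMilliseconds, $rand.TotalMilliseconds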

 

So, enforcing sequential I/Os will get you optimal I/O performance, both at the operating system level (fewer I/Os to process) and at the storage level. Unfortunately, the Windows file system tends to cause random I/Os to occur when data is written out, and then subsequently when that same data is read back in. The reason for this is that when files are created or extended, the Windows file system does not know how large those creations or extensions are going to be, so it does not know what to look for in finding the best logical allocation to place that data so it can be written in one logical location (one I/O). It may just find the next available allocation, which may not be large enough, so it has to find another allocation (another I/O) and keep doing so until all the data is written out.  The IntelliWrite® technology in Diskeeper® and V-locity® solves this by providing intelligence back to the file system so it can find the best allocation, which helps enforce sequential I/Os rather than random I/Os, ensuring optimal I/O performance.


Myriad of Windows Performance Problems Traced to a Single Source


 

 

Believe it or not, 12 substantial Windows performance issues that can cause the most frustration and chew up valuable time can be directly traced to a single source. In this article, we’re going to show you how.

First, let’s briefly touch on and describe each issue.

1. Slow Application Performance

This is an issue that many of us are familiar with. A company is running a large application such as EMR/EHR or ERP that the entire enterprise depends on, and users end up waiting endlessly for data. Or a sales team is working in a CRM application while speaking with prospects; waiting for data in such a scenario is not helpful to getting a sale made. It could be an LMS, used for the vital administration of educational programs. Other applications such as SharePoint, MS Exchange, VDI, POS and even legacy and proprietary apps all suffer from this same malady. This makes your phone ring or your support tickets blow up with user complaints.

2. Application Crashes

This one is especially annoying, and just brings everything to a dead stop. As an example, a customer service representative is on the phone with an important customer, and while looking at the customer’s data, the screen freezes up. The application has crashed. And, oh yes, this will affect others accessing that application, too.

When this happens, often a user will yell out, “What’s wrong with the computer?!” But of course, it’s not the computer. We’ll get to that at the end.

3. Missing SLAs

SLAs are the delivery backbone of many companies. Service quality and availability are service aspects written into contracts, and if they’re not met, it not only means lost income, it can also mean lost business and clients. This is especially true today in a SaaS environment, in which a client can simply pull the plug and go to another provider.

A primary cause of missed SLAs is slow performance. Yet again, it traces to the same source as these others.

4. Slow Data Transfer Rates

There are many reasons for data transfer, including backups to other locations, and importing data to new locations. But they all boil down to transferring a large amount of data from one place to another. When transfer rates are slow, it means waiting. And waiting. And waiting. This eats up system as well as staff resources.

Slow data transfer rates are traceable to this same source.

5. SQL Query Timeouts and Latency

Today, many enterprises survive on data, which means they’re also living and dying on database queries. When a query is originated, the process through which the query was made will be waiting until the query is satisfied. The longer the wait (latency), the more cost is eaten up in terms of time and resources.

If a timeout occurs, that means that the query must be started again. This, of course, can mean a serious delay.

6. SQL Deadlocks

This phenomenon occurs when two or more processes each hold a resource the other needs and wait for the other to finish before continuing. On the user end, SQL deadlocks produce the same result as timeouts: endless waiting.

7. SQL Server 15-Second Warnings

An I/O request should complete within milliseconds. The 15-second warning that SQL Server has been waiting for longer than 15 seconds for an I/O request to complete indicates a serious performance problem—once again traceable to the same issue.

8. You Upgrade Hardware…but Performance Still Slow

Many seem to think that the best (and maybe the only) way to solve performance problems is to upgrade hardware. But what happens when you upgrade hardware, and performance is still sluggish?

This is simply a very expensive way to indicate that you have “solved” the wrong problem. Yes, performance was an issue, but the reason behind it was not hardware related.

Yes, you guessed it: the cause is the same as all of these other problems.

9. Slow SSD Read/write Speed

Following along on the above scenario, many companies install SSDs to improve performance—and given the substantial performance difference between SSDs and HDDs, that performance difference should be drastic.

But what happens when the read/write speed to SSDs is still slow? Yes, you’re still suffering from the same problem.

10. Storage Performance Problems

Storage has become very advanced today, with sophisticated solutions designed to improve storage performance. But just as with the issues described above, it often happens that the performance problems you’re experiencing with storage are not due to the hardware…but to the same cause as the rest of these issues.

11. Slow Server Performance

This is the generally sluggish performance phenomenon, the causes of which can be tough to trace down. For that reason, many don’t try—they just decide that hardware must be upgraded: new servers, new storage, perhaps even a new network.

Since slow server performance is most often rooted in the same cause as all of these other issues, though, you might want to address that cause first. Or at the very least, you can buy much less hardware and save that budget!

12. VM Density and Consolidation Issues

VM consolidation is the action of consolidating several VMs into one physical server. Virtual machine density means the quantity of VMs being run from a single physical host; the higher the VM density is, the more efficient the system may be.

Both VM consolidation and VM density contain the same inherent performance problem as each of these other scenarios and may be preventing you from loading more VMs onto a single host.

The Basic Problem

All of these issues that cost you peace of mind can be traced back to storage I/O inefficiencies.

As great as virtualization has been for server efficiency, one of its biggest downsides is that it adds complexity to the data path – otherwise known as the I/O blender effect that mixes and randomizes I/O streams.

There are 2 severe I/O inefficiencies causing this.

The first is caused by the behavior of the Windows file system. It will tend to break up writes into separate storage I/Os and send each I/O packet down to the storage layer separately, causing I/O characteristics that are much smaller, more fractured, and more random than they need to be – this, along with the I/O blender effect noted above, is a recipe for bad storage performance.

This is a “death by a thousand cuts” scenario that is like pouring molasses on your systems – everything is running, but not running nearly as fast as it could.

You could opt to throw more hardware at the problem, but this is expensive and disruptive and can be premature – it is much better to tune what you already own to get the performance you should be getting.

The second is storage I/O contention. This happens when you have multiple systems all sharing the same storage resource.

Windows is breaking up that I/O profile into a much smaller, more fractured, more random I/O profile than it needs to be. If you clean that up on just one VM, then all of the data from that one VM to the host is streamlined, but you still have all the data from neighbor VMs that are noisy and causing contention.

 

 

 

As you can see, your performance is not only penalized once, but twice, by storage I/O inefficiencies. This means systems process workloads about 50% slower than they should on the typical Windows server, because far more I/O is needed to process any given workload. This has been found to be the cause of a host of Windows performance problems such as those mentioned earlier.

The Solution

In order to achieve an ideal performance level from your hardware infrastructure, you want large, clean, and contiguous read and write I/Os from all sources, eliminating the I/O blender effect.

Larger, cleaner, sequential I/Os result in fewer I/Os to process and thus faster data transfer rates for peak performance. In such a case, you can have 1GB of data, but instead of transferring it in 100,000 I/Os, you can accomplish it in 70,000 or fewer (in other words, the average payload per I/O grows from roughly 10KB to roughly 15KB).

The next factor is reading and writing I/Os sequentially, instead of randomly. When dealing with storage, you’ll find that sequential I/Os always out-perform random I/Os on hard disk drives, SSDs and flash storage.

These 3 factors work together to transform the nature of the I/O to improve performance:
– Larger I/O
– Sequential I/O
– Less I/O

The overall effect is that the OS workload is reduced, because there are fewer I/Os to process, and they are occurring sequentially.

V-locity

This is the solution brought into effect by the V-locity® software:

• Fewer I/Os, because they are larger
• Sequential I/Os
• Read I/O served from memory

V-locity accomplishes these improvements through proprietary technology that optimizes and streamlines both reads and writes.

Write performance: IntelliWrite® patented technology eliminates small, fractured I/Os caused by Windows splitting files into multiple write operations. V-locity enforces large, clean, contiguous writes for more payload with every I/O operation.

Read performance: IntelliMemory® patented technology reduces read I/Os from storage by caching hot data server-side. Reads are cached right at VM level from otherwise-idle, available DRAM. Not only does this enormously decrease the I/O latency time, but also decreases the I/O traffic to the storage unit, thus freeing up the storage bandwidth for other work.

Because of these substantial improvements, V-locity is able to regularly provide 30 to 40 percent faster data transfer speeds, eliminating a myriad of Windows performance problems.

Are your servers good candidates for V-locity? Find out quickly and easily without investing a lot of time – Download the FREE Condusiv I/O Assessment Tool. This free tool will:


• Analyze data across 11 performance metrics
• Easily identify systems suffering from performance issues
• Graphs display averages and peaks for each hour 

 

 

  

V-locity improves the performance and reliability of Windows systems.

Downloads | Purchase | Case Studies

How To Find Out If Your Servers Have an I/O Performance Problem


IT pros know all too well the pain and frustration due to performance problems such as users getting disconnected and complaining, SQL reports or queries taking forever or timing out, annoyingly slow applications causing users to wait, losing productivity and complaining, backups failing to complete in the allotted window or even having to constantly reboot the servers to restore performance for a bit. Troubleshooting these issues can cost you many late nights, lost weekends and even missing important events.

These issues are commonly traced back to storage I/O inefficiencies. No matter the underlying storage, the Windows file system will tend to break up writes into separate storage I/Os and send each I/O packet down to the storage layer separately, causing I/O characteristics that are much smaller, more fractured, and more random than they need to be. In a virtual environment, the I/O blender effect comes into play, mixing and randomizing I/O streams coming from the different virtual machines on that hypervisor and causing I/O contention. This means systems process workloads about 50% slower than they should on the typical Windows server, because far more I/O is needed to process any given workload. This has been found to be the cause of a host of Windows performance problems.

So how do you know if your servers have fallen prey to this condition? You can stop guessing and find out for sure. It’s easy: run the FREE Condusiv® I/O Assessment Tool.

Condusiv’s I/O Assessment Tool - FREE (yes FREE)

The Condusiv I/O Assessment Tool is designed to show you how well your storage is performing. It gathers numerous storage performance metrics that Windows automatically collects over an extended period of time – we recommend 5 days. It then performs numerous statistical analyses looking for potential problems throughout the period of time the monitoring took place. It even looks for potential areas of cross-node conflicts. By correlating across multiple systems, it can infer that nodes are causing performance issues for each other during overlapping periods of time. It then displays a number of metrics that will help you understand where potential bottlenecks might be.

The tool has 4 basic phases:

• Setup

• Data Collection

• Analysis

• Reporting

Full technical details on these phases are available here.

Identifying the Source of Performance Issues

Once the tool has run it will have identified and ranked the systems with the most I/O issues and display what those issues are across 11 different key performance metrics by identifying performance deviations when workload is the heaviest.

The reporting screen will display three main sections:

•Summary of systems in data collection

•Series of individual storage performance metrics

•Conclusions about your storage performance for selected systems

In the summary section, there is a grid containing the list of systems that are available from the data collection you just ran or imported. The grid also contains totals for each system for the various metrics for the entire time of the data collection. The list is sorted from systems that have potential storage performance issues to those that do not appear to have storage performance issues. The systems that have storage performance issues are highlighted in red. The systems that might have storage performance issues are in yellow. The systems that do not appear to have storage performance issues are in green. By default, the systems that have storage performance issues are selected for the report. You can select all the systems, or any set of systems including a single system, to report on.

 

[Screenshot: Performance Assessment Tool – systems summary]

 

Once you have selected some systems to report on and asked to display the report, you can expand 11 key performance metrics:

Workload in Gigabytes:

This is a measure of the number of Gigabytes of data that was processed by your storage. It is represented in 5-minute time slices. The peaks indicate when the storage is being used the most and can show you periods where you can offload some work to provide greater performance during peak load periods. The valleys indicate periods of lower storage utilization.

 

[Screenshot: Workload in Gigabytes chart]

 

I/O Response Time:

The I/O Response Time is the average amount of time in milliseconds (1000ths of a second) that your storage system takes to process any one I/O. The higher the I/O Response Time, the worse the storage performance. The peaks indicate possible storage performance bottlenecks.

 

[Screenshot: I/O Response Time chart]
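
If you want a quick spot check of this metric outside the tool, the standard Windows disk counters report the same thing. A minimal sketch (values are in seconds, so multiply by 1,000 for milliseconds; the interval and sample count are arbitrary):

# Sample average read and write response times across all physical disks
Get-Counter -Counter '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
                     '\PhysicalDisk(_Total)\Avg. Disk sec/Write' -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }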

 

Queue Depth:

Queue Depth represents the number of I/Os that are having to wait because the storage is busy processing other I/O requests. The larger the value, the more the storage system is struggling to keep up with your need to access data. The higher the queue depth, the worse the storage performance. It directly correlates to inefficient storage performance. 

 

[Screenshot: Queue Depth chart]

Split I/Os:

Split I/Os are extra I/O operations that have to be performed because the file system has broken up a file into multiple fragments on the disk. To have a truly dynamic file system with the ability for files to be different sizes, easily expandable, and accessible using different sized I/Os, file systems have to break files up into multiple pieces. Since the size of volumes has gotten much larger and the number of files on a volume has also exploded, fragmentation has become a more severe problem. However, not all file fragments cause performance problems. Sometimes I/Os are done in such a manner that they are aligned with the file allocations and therefore always fit within a file’s fragments. Most of the time, however, that is simply not the case. When it isn’t the case, a single I/O to process data for an application may have to be split up by the file system into multiple I/Os. Thus, the term – Split I/O. When the free space gets severely fragmented, this becomes even more likely and accelerates the rate of fragmentation and therefore corresponding Split I/Os. Split I/Os are bad for storage performance. Preventing and eliminating Split I/Os is one of the easiest ways to make a big difference in improving storage performance.

See I/Os Are Not Created Equal – Random I/O versus Sequential I/O for more detail. 

[Screenshot: Split I/Os chart]
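
Outside the tool, Windows tracks split I/Os per volume with a standard counter; here is a quick sketch for spot-checking it (a persistently non-zero rate under load points to fragmentation):

# Sample the split I/O rate for every logical disk
Get-Counter -Counter '\LogicalDisk(*)\Split IO/Sec' -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }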

 

IOPS:

IOPS is the average number of I/O Operations per second that your storage system is being asked to perform. The higher the IOPS, the more work that is being done.

 

[Screenshot: IOPS chart]

I/O Size:

I/O Size is the average size (in kilobytes) of I/Os you are performing to your storage system. It is an indication of how efficient your systems are processing data. Generally, the smaller the I/O size, the more inefficient that the data is being processed. Please note that certain applications may just process smaller I/Os. They tend to be exceptions to the rule, however.

 

[Screenshot: I/O Size chart]

 

I/O Blender Effect Index:

This is a measure of I/Os from multiple systems at the same time that are likely causing performance problems. The problem is caused by their conflict with I/Os from other systems at the same time. When multiple VMs on a single hypervisor are sending I/Os to the hypervisor at the same time, the potential for conflict rears its ugly head. The same is true when multiple systems (physical or virtual) are using shared storage such as SANs. Because this tool collects data from multiple systems in small, discrete, and overlaid periods of time, it is able to estimate contention. By searching for periods of time where performance appears to be suffering and then checking to see if any other system is having a potential problem during the same time, the tool can determine statistically that this particular period of time is problematic due to cross-node interference. The amount of cross-node conflict is taken into consideration, thus creating the index.

 

[Screenshot: I/O Blender Effect Index chart]

Seconds per Gigabyte:

This is a measure of how many seconds it would take to process one gigabyte of data through your storage system using the current I/O Response Time and the current I/O Size. Effectively, this tool calculates the number of potential operations per second at the current I/O Response Time rate. It then divides one gigabyte by the product of potential operations per second times the I/O Size. This can vary widely based on I/O contention, size of I/Os, and several other factors. The lower the value, the better the storage performance.

 

[Screenshot: Seconds per Gigabyte chart]
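
To make the calculation concrete, here is a small worked example in PowerShell; the 5 ms response time and 32KB I/O size are purely illustrative numbers:

# Seconds per gigabyte = 1 GB / (potential IOPS x average I/O size)
$responseTimeSec = 0.005                          # 5 ms average I/O response time
$ioSizeBytes     = 32KB                           # 32 KB average I/O size
$iops            = 1 / $responseTimeSec           # about 200 potential operations per second
$secPerGB        = 1GB / ($iops * $ioSizeBytes)   # about 164 seconds per gigabyte
"{0:N0} potential IOPS, {1:N0} seconds per gigabyte" -f $iops, $secPerGB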

Reads to Writes Ratio:

This is the ratio of reads to writes as a percentage. If you had 5,000 total I/Os and 3,456 were Read (1,544 Writes) the ratio would be 69.12%. It shows the workload characteristics of the environment. In other words, it shows if the application is predominantly Read or Write intensive. Generally, the potential to optimize performance is greater for read intensive applications.

[Screenshot: Reads to Writes Ratio chart]

 

Memory Utilization:

This is a measure of the percentage of memory being used by your system. Some performance problems may be caused by having limited amounts of available memory. High memory utilization may indicate that one of the bottlenecks to storage performance is inadequate memory for your systems and applications to process data. Having adequate free memory can open doors to potential optimization techniques. Sometimes just increasing the available memory on a system can make a significant difference in overall performance, and storage performance specifically.

 

[Screenshot: Memory Utilization chart]

CPU Utilization:

This is a measure of how busy your CPU is as a percentage. This is overall utilization for the entire system, not just per core or socket. The reason this measure matters is that if your CPU utilization is close to 100%, you probably do not have a storage related issue. 

 

[Screenshot: CPU Utilization chart]

 

Potential for I/O Performance Optimization:

This measurement looks at a substantial amount of the data collected and determines how likely it is that your I/O performance can be increased via various optimization techniques without having to acquire more or faster hardware. 

Critical, Moderate or Minimal I/O Performance Issues will be noted. 

 

[Screenshot: Potential for I/O Performance Optimization conclusion]

 

To find out if your servers have an I/O problem, download the FREE Condusiv I/O Assessment Tool. It’s easy:

1. Download

2. Install

3. Choose your systems to monitor

4. Choose how long to collect data

5. Start Collection

6. Pull up the dashboard after your data is collected and look at the results.

 

Get Started Now

 



