
A Deep Dive Into The I/O Performance Dashboard


While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity.  The data shown here is similar in nature to other Windows performance monitoring utilities and provides a wealth of data on I/O traffic streams. 

By default, the information displayed is from the time the product was installed. You can easily filter this down to a different time frame by clicking on the “Since Installation” picklist and choosing a different time frame such as Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days.  The data displayed will automatically be updated to reflect the time frame selected.

 

The first section of the display, labeled “I/O Performance Metrics”, shows Average, Minimum, and Maximum values for I/Os Per Second (IOPS), throughput measured in Megabytes per Second (MB/Sec), and application I/O latency measured in milliseconds (msecs). Diskeeper, V-locity and SSDkeeper use the Windows high performance system counters to gather this data, and it is measured down to the microsecond (1/1,000,000 second).

While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description just to make sure. 

IOPS is the number of I/Os completed in 1 second of time.  This is a measurement of both read and write I/O operations.  MB/Sec is a measurement that reflects the amount of data being worked on and passed through the system.  Taken together, they represent speed and throughput efficiency.

One thing I want to point out is that the Latency value shown in this report is not measured at the storage device, but is instead a much more accurate reflection of I/O response time at the application level.  This is where the rubber meets the road.  Each I/O that passes through the Windows storage driver has a start and completion time stamp.  The difference between these two values measures the real-world elapsed time for how long it takes an I/O to complete and be handed back to the application for further processing.  Measurements at the storage device do not account for network, host, and hypervisor congestion.  Therefore, our Latency value is much more meaningful than typical hardware counters for I/O response time or latency.  In this display, we also provide meaningful data on the percentage of I/O traffic that is reads versus writes, which helps gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
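For the technically curious, here is a minimal sketch of how those three headline metrics can be derived from per-I/O start/completion timestamps like the ones described above. The sample records are invented values used purely to show the arithmetic, not data from the product.

```python
# Minimal sketch: deriving IOPS, MB/Sec, and average latency from per-I/O
# start/completion timestamps. The records below are made-up illustration values.
records = [
    # (start_seconds, completion_seconds, bytes_transferred)
    (0.000000, 0.000450, 16 * 1024),
    (0.000100, 0.000900, 64 * 1024),
    (0.000200, 0.000700, 8 * 1024),
]

window = max(end for _, end, _ in records) - min(start for start, _, _ in records)
iops = len(records) / window
mb_per_sec = sum(size for _, _, size in records) / window / (1024 * 1024)
avg_latency_ms = sum(end - start for start, end, _ in records) / len(records) * 1000

print(f"IOPS: {iops:.0f}  Throughput: {mb_per_sec:.2f} MB/Sec  "
      f"Avg latency: {avg_latency_ms:.3f} ms")
```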

The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache. 

 

Systems with higher workloads compared to other systems in your environment likely have higher I/O traffic, tend to cause more of the I/O blender effect when connected to shared SAN storage or a virtualized environment, and are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity and SSDkeeper provide.

Now moving into the third section of the display, labeled “Memory Usage”, we see measurements that represent the Total Memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache.  The purpose of our patented read caching technology is twofold: satisfy frequently repetitive read requests from cache, and identify the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from cache as well.  So, it’s not uncommon for the “Data Satisfied from Cache” value, compared to the “Total Workload”, to be a bit lower than with other types of caching algorithms.  Storage arrays tend to do quite well when handed large sequential I/O traffic but choke when small random reads and writes are part of the mix.  Eliminating I/O traffic from going to storage is what it’s all about.  The fewer I/Os to storage, the faster and more data your applications will be able to access.

In addition, we show the average, minimum, and maximum values for free memory used by the cache.  For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is memory used by the cache plus memory reported by the system as free).  The memory values will be displayed in a yellow font if the size of the cache is being severely restricted by the current memory demands of other applications, preventing our product from providing maximum I/O benefit.  The memory values will be displayed in red if the Total Memory is less than 3GB.

Read I/O traffic that is potentially cacheable can receive an additional benefit from adding more DRAM for the cache, allowing the IntelliMemory caching technology to satisfy a greater amount of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD) and offload it away from the slower back-end storage. This has the effect of further reducing average storage I/O latency and saving even more storage I/O time.
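To make the arithmetic concrete, here is a small sketch of how the blended average read latency falls as more reads are served from DRAM. The latency figures are assumptions chosen only to reflect the 10-15x ratio mentioned above; they are not product measurements.

```python
# Sketch: blended average read latency as the DRAM cache hit rate rises.
# 12 us vs 150 us is an assumed ~12.5x gap, consistent with the "10-15 times
# faster than SSD" statement above, but not a measured value.
def blended_read_latency(hit_rate, dram_latency_us=12, storage_latency_us=150):
    return hit_rate * dram_latency_us + (1 - hit_rate) * storage_latency_us

for hit_rate in (0.0, 0.25, 0.50, 0.75):
    print(f"{hit_rate:.0%} of reads served from DRAM -> "
          f"~{blended_read_latency(hit_rate):.0f} us average read latency")
```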

Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory that those applications can use (if you haven’t done so already), to prevent them from ‘stealing’ any additional memory that you add to those machines.

It should be noted that the IntelliMemory read cache is dynamic and self-learning.  This means you do not need to pre-allocate a fixed amount of memory to the cache or run some pre-assessment tool or discovery utility to determine what should be loaded into cache.  IntelliMemory will only use memory that is otherwise free, available, or unused for its cache and will always leave plenty of memory untouched (1.5GB – 4GB depending on the total system memory) and available for Windows and other applications to use.  As demand for memory rises, IntelliMemory will release memory from its cache and give it back to Windows so there will not be a memory shortage.  There is further intelligence in the IntelliMemory caching technology to know, in real time, precisely what data should be in cache at any moment and the relative importance of the entries already in the cache.  The goal is to ensure that the data maintained in the cache results in the maximum benefit possible in reducing read I/O traffic.
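To illustrate the general idea of a frequency-aware read cache that gives memory back under pressure, here is a toy sketch. It is emphatically not IntelliMemory’s algorithm (which is proprietary and far more sophisticated); it only shows the concept in miniature, with all names and sizes invented for the example.

```python
# Toy illustration only: a tiny read cache that favors frequently re-read blocks
# and can release memory on demand. Not IntelliMemory's actual logic.
from collections import OrderedDict

class ToyReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()              # block_id -> (data, hit_count)

    def read(self, block_id, read_from_storage):
        if block_id in self.blocks:
            data, hits = self.blocks[block_id]
            self.blocks[block_id] = (data, hits + 1)
            self.blocks.move_to_end(block_id)    # mark as recently used
            return data                          # served from RAM
        data = read_from_storage(block_id)       # cache miss: go to storage
        if len(self.blocks) >= self.capacity:
            # Evict the entry with the fewest repeat hits.
            victim = min(self.blocks, key=lambda b: self.blocks[b][1])
            del self.blocks[victim]
        self.blocks[block_id] = (data, 1)
        return data

    def release_memory(self, n_blocks):
        # Give memory back under pressure, as the post describes IntelliMemory doing.
        for _ in range(min(n_blocks, len(self.blocks))):
            victim = min(self.blocks, key=lambda b: self.blocks[b][1])
            del self.blocks[victim]
```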

So, there you have it.  I hope this deeper dive explanation provides better clarity to the benefit and internal workings of Diskeeper, V-locity and SSDkeeper as it relates to I/O performance and memory management.

You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try


Doing it All: The Internet of Things and the Data Tsunami


“If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement…”

For a while there, it looked like corporate IT resource planning was going to be easy. Organizations would move practically everything to the cloud, lean on their cloud service suppliers to maintain performance, cut back on operating expenses for local computing, and reduce—or at least stabilize—overall cost.

Unfortunately, that prediction didn’t reckon with the Internet of Things (IoT), which, in terms of both size and importance, is exploding.

What’s the “edge”?

It varies. To a telecom, the edge could be a cell phone, or a cell tower. To a manufacturer, it could be a machine on a shop floor. To a hospital, it could be a pacemaker. What’s important is that edge computing allows data to be analyzed in near real time, allowing actions to take place at a speed that would be impossible in a cloud-based environment. 

(Consider, for example, a self-driving car. The onboard optics spot a baby carriage in an upcoming crosswalk. There isn’t time for that information to be sent upstream to a cloud-based application, processed, and an instruction returned before slamming on the brakes.)

Meanwhile, the need for massive data processing and analytics continues to grow, creating a kind of digital arms race between data creation and the ability to store and analyze it. In the life sciences, for instance, it’s estimated that only 5% of the data ever created has been analyzed.

Condusiv® CEO Jim D’Arezzo was interviewed by App Development magazine (which publishes news to 50,000 IT pros) on this very topic, in an article entitled “Edge computing has a need for speed.” Noting that edge computing is predicted to grow at a CAGR of 46% between now and 2022, Jim said, “If you’re a CIO today, basically you have no choice. You have to do edge computing and cloud computing, and you have to do them within budgets that don’t allow for wholesale hardware replacement. For that to happen, your I/O capacity and SQL performance need to be optimized. And, given the realities of edge computing, so do your desktops and laptops.”

At Condusiv, we’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

Financial Sector Battered by Rising Compliance Costs


Finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT—and on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry.

All over the world, financial services companies are facing skyrocketing compliance costs. Almost half the respondents to a recent Accenture survey of compliance officers in 13 countries said they expected 10% to 20% increases, and nearly one in five are expecting increases of more than 20%.

Much of this is driven by international banking regulations. At the beginning of this year, the Common Reporting Standard went into effect. An anti-tax-evasion measure signed by 142 countries, the CRS requires financial institutions to provide detailed account information to the home governments of virtually every sizeable depositor.

Just to keep things exciting, the U.S. government hasn’t signed on to CRS; instead we require banks doing business with Americans to comply with the Foreign Account Tax Compliance Act of 2010. Which requires—surprise, surprise—pretty much the same thing as CRS, but reported differently.

And these are just two examples of the compliance burden the financial sector must deal with. Efficiently, and within a budget. In a recent interview by ValueWalk entitled “Compliance Costs Soaring for Financial Institutions,” Condusiv® CEO Jim D’Arezzo said, “Financial firms must find a path to more sustainable compliance costs.”

Speaking to the site’s audience (ValueWalk is a site focused on hedge funds, large asset managers, and value investing), D’Arezzo noted that finance is already an outlier in terms of IT costs. The industry devotes 10.5% of total revenue to IT, more than government, healthcare, retail, or anybody else. It’s also an outlier in terms of IT staff load; on average, each financial industry IT staffer supports only 15.7 users, the fewest of any industry. (Government averages 37.8 users per IT staff employee.)

To ease these difficulties, D’Arezzo recommends that the financial industry consider advanced technologies that provide cost-effective ways to enhance overall system performance. “The only way financial services companies will be able to meet the compliance demands being placed on them, and at the same time meet their efficiency and profitability targets, will be to improve the efficiency of their existing capacity—especially as regards I/O reduction.”

At Condusiv, that’s our business. We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

 

For an explanation of why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

 

How to make NVMe storage even faster


This is a blog to complement a vlog that I posted a few weeks ago, in which I demonstrated how to use the intelligent RAM caching technology found in the V-locity® software from Condusiv® Technologies to improve the performance that a computer can get from NVMe flash storage. You can view this video here:

 

 A question arose from a couple of long-term customers about whether the use of the V-locity software was still relevant if they started utilizing very fast, flash storage solutions. This was a fair question!

The V-locity software is designed to reduce the amount of unnecessary storage I/O traffic that actually has to go out and be processed by the underlying disk storage layer. It not only reduces the amount of I/O traffic, but it optimizes that which DOES have to go out to disk, and moreover, it further reduces the workload on the storage layer by employing a very intelligent RAM caching strategy.

So, given that flash storage is not only becoming more prevalent in today’s compute environments, but can also process storage I/O traffic VERY fast compared to its spinning disk counterparts and is capable of processing more I/Os per Second (IOPS) than ever before, the very sensible question was this:


"Can the use of Condusiv's V-locity software provide a significant performance increase when using very fast flash storage?"


As I was fortunate to have recently implemented some flash storage in my workstation, I was keen to run an experiment to find out.


SPOILER ALERT: For those of you who just want to have the question answered, the answer is a resounding YES!

The test showed beyond doubt that with Condusiv’s V-locity software installed, your Windows computer has the ability to process significantly more I/Os per Second, process a much higher throughput of data, and allow the storage I/O heavy workloads running in computers the opportunity to get significantly more work done in the same amount of time – even when using very fast flash storage.

 

For those of you true ‘techies’ that are as geeky as me, read on, and I will detail the testing methodology and results in more detail. 

The storage that I now had in my workstation (and am still happily using!) was a 1 terabyte SM961 POLARIS M.2-2280 PCI-E 3.0 X 4 NVMe solid state drive (SSD).

 

 Is it as fast as it’s made out to be? Well, in this engineer’s opinion – OMG YES!

 

It makes one hell of a difference, when compared to spinning disk drives. This is in part because it’s connected to the computer via a PCI Express (PCIe) bus, as opposed to a SATA bus. The bus is what you connect your disk to in the computer, and different types of buses have different capabilities, such as the speed at which data can be transferred. SATA-connected disks are significantly slower than today’s PCIe-connected storage using an NVMe device interface. There is a great Wiki about this here if you want to read more: 

https://en.wikipedia.org/wiki/NVM_Express

 

To give you an idea of the improvement though, consider that the Advanced Host Controller Interface (AHCI) that is used with the SATA connected disks has one command queue, in which it can process 32 commands. That’s up to 32 storage requests at a time, and that was okay for spinning disk technology, because the disks themselves could only cope with a certain number of storage requests at a time.

NVMe on the other hand doesn’t have one command queue, it has 65,535 queues. AND, each of those command queues can themselves accommodate 65,536 commands. That’s a lot more storage requests that can be processed at the same time! This is really important, because flash storage is capable of processing MANY more storage requests in parallel than its spinning disk cousins. Quite simply NVMe was needed to really make the most of what flash disk hardware can do. You wouldn’t put a kitchen tap (faucet) on the end of a fire hose and expect the same amount of water to flow through it, right? Same principle!
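To put concrete numbers on that comparison, here is the simple arithmetic implied by the queue figures above.

```python
# Maximum commands that can be outstanding at once, from the figures above.
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

print(f"AHCI/SATA: {ahci_queues * ahci_depth} outstanding commands")
print(f"NVMe:      {nvme_queues * nvme_depth:,} outstanding commands")  # ~4.29 billion
```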

As you can probably tell, I’m quite excited by this boost in storage performance. (I’m strange like that!) And, I know I’m getting a little off topic (apologies), so back to the point!

I had this SUPER-FAST storage solution and needed to prove one way or another if Condusiv’s V-locity software could increase the ability of my computer to process even more workload.

Would my computer be able to process more storage I/Os per Second?

Would my computer be able to process a larger amount of storage I/O traffic (megabytes) every second?

 

Testing Methodology

To answer these questions, I took a virtual machine, and cloned it so that I had two virtual machines that were as identical as I could make them. I then installed Condusiv’s V-locity software on both and disabled V-locity on one of the machines, so that it would process storage I/O traffic, just as if V-locity wasn’t installed.

To generate a storage I/O traffic workload, I turned to my old friend IOMETER. For those of you who might not know IOMETER, this is a software utility originally designed by Intel, but it is now open source and available at SourceForge.net. It is designed as an I/O subsystem measurement tool and is great for generating I/O workloads of different types (very customizable!) and measuring how quickly that I/O workload can be processed. Great for testing networks or, in this case, how fast you can process storage I/O traffic.

I configured IOMETER on both machines with the type of workload that one might find on a typical SQL database server. I KNOW, I know, there is no such thing as a ‘typical’ SQL database, but I wanted a storage I/O profile that was as meaningful as possible, rather than a workload that would just make V-locity look good. Here is the actual IOMETER configuration:

Worker 1 – 16 kilobyte I/O requests, 100% random, 33% Write / 67% Read

Worker 2 – 64 kilobyte I/O requests, 100% random, 33% Write / 67% Read
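For readers who want to play with something similar without IOMETER, here is a rough, illustrative Python approximation of that profile. It is not a substitute for the actual IOMETER test, and the file path, scratch-file size, and run time are assumptions for the sketch.

```python
# Rough approximation of the IOMETER profile above (NOT the IOMETER tool):
# two block sizes, 100% random aligned offsets, ~67% reads / ~33% writes.
# Assumes a pre-created scratch file at PATH of FILE_SIZE bytes (Windows).
import os, random, time

PATH = r"C:\iotest\testfile.dat"   # assumed scratch file, created beforehand
FILE_SIZE = 1 * 1024**3            # 1 GB scratch file (assumption)
BLOCK_SIZES = [16 * 1024, 64 * 1024]
READ_RATIO = 0.67
DURATION = 30                      # seconds to run (assumption)

def run():
    ops = 0
    fd = os.open(PATH, os.O_RDWR | os.O_BINARY)
    try:
        end = time.perf_counter() + DURATION
        while time.perf_counter() < end:
            bs = random.choice(BLOCK_SIZES)
            offset = random.randrange(FILE_SIZE // bs) * bs   # aligned random offset
            os.lseek(fd, offset, os.SEEK_SET)
            if random.random() < READ_RATIO:
                os.read(fd, bs)
            else:
                os.write(fd, os.urandom(bs))
            ops += 1
    finally:
        os.close(fd)
    print(f"Completed {ops} I/Os in {DURATION}s (~{ops / DURATION:.0f} IOPS)")

if __name__ == "__main__":
    run()
```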

Test Results

(The IOMETER results with V-locity disabled, the results with V-locity enabled, and a summary comparison were shown here as screenshots.)

Conclusion

 

In this lab test, the presence of V-locity reduced the average amount of time required to process storage I/O requests by around 65%, allowing a greater number of storage I/O requests to be processed per second and a greater amount of data to be transferred.
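For a back-of-the-envelope sense of why lower per-I/O latency shows up as higher IOPS and throughput, consider the sketch below. The queue depth and baseline latency are illustrative assumptions, not figures from this test.

```python
# With a fixed number of I/Os outstanding (a simplifying assumption), IOPS is
# roughly outstanding_ios / average_latency (Little's law). The inputs below
# are assumed values used only to show the shape of the relationship.
outstanding_ios = 32                                   # assumed queue depth
baseline_latency_s = 0.001                             # assumed 1 ms per I/O
reduced_latency_s = baseline_latency_s * (1 - 0.65)    # the ~65% reduction above

print(f"Baseline: ~{outstanding_ios / baseline_latency_s:,.0f} IOPS")
print(f"Reduced:  ~{outstanding_ios / reduced_latency_s:,.0f} IOPS")
```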

To prove beyond doubt that it was indeed V-locity that caused the additional storage I/O traffic to be processed, I stopped the V-locity service. This immediately ‘turned off’ all of the RAM caching and other optimization engines that V-locity was providing, and the net result was that the IOPS and throughput dropped to normal as the underlying storage had to start processing ALL of the storage traffic that IOMETER was generating.

What value is there to reducing storage I/O traffic?

The more you can reduce storage I/O traffic that has to go out and be processed by your disk storage, the more storage I/O headroom you are handing back to your environment for use by additional workloads. It means that your current disk storage can now cope with:

- More computers sharing the storage. Great if you have a Storage Area Network (SAN) underpinning your virtualized environment, for example. More VMs running!

- More users accessing and manipulating the shared storage. The more users you have, the more storage I/O traffic is likely to be generated.

- Greater CPU utilization. CPU speeds and processing capacity keep increasing. Now that available processing power typically exceeds what workloads need, V-locity can help your applications become more productive and use more of that processing power by not having to wait so much on the disk storage layer.

 

If you can achieve this without having to replace or upgrade your storage hardware, it not only increases the return on your current storage hardware investment, but also might allow you to keep that storage running for a longer period of time (if you’re not on a fixed refresh cycle).

Sweat the storage asset!

(I hate that term, but you get the idea)

When you do finally need to replace your current storage, perhaps it won’t be as costly as you thought because you’re not having to OVER-PROVISION the storage as much, to cope with all of the excess, unnecessary storage traffic that Condusiv’s V-locity software can eliminate.

I typically see a storage traffic reduction of at least 25% at customer sites.

AND, I haven’t even mentioned the performance boost that many workloads receive from the RAM caching technology provided by Condusiv’s V-locity software. It is worth remembering that as fast as today’s flash storage solutions are, the RAM that you have in your computers is faster! The greater the percentage of read I/O traffic that you can satisfy from RAM instead of the storage layer, the better performing those storage I/O-hungry applications are likely to be.

What type of applications benefit the most?

In the real world, V-locity is not a silver-bullet for all types of workloads, and I wouldn’t insult your intelligence by saying that it was. If you have some workloads that don’t generate a great deal of storage I/O traffic, perhaps a DNS server, or DHCP server, well, V-locity isn’t likely to make a huge difference. That’s my honest opinion as an IT Engineer.

HOWEVER, if you are using storage I/O-hungry applications, then you really should give it a try.

Here are just some examples of the workloads that thousands of V-locity customers are ‘performance-boosting’ with Condusiv’s I/O reduction and RAM caching technologies:

  • Database solutions such as Microsoft SQL Server, Oracle, MySQL, SQL Express, and others.
  • Virtualization solutions such as Microsoft Hyper-V and VMware.
  • Enterprise Resource Planning (ERP) solutions like Epicor.
  • Business Intelligence (BI) solutions like IBM Cognos.
  • Finance and payroll solutions like SAGE Accounting.
  • Electronic Health Records (EHR) solutions, such as MEDITECH.
  • Customer Relationship Management (CRM) solutions, such as Microsoft Dynamics.
  • Learning Management Systems (LMS) solutions.
  • Not to mention email servers like Microsoft Exchange AND busy file servers.

 

 

Do you use any of these in your IT environment?

 

There are case studies on the Condusiv web site for all of these workload types (and more), here:

http://www.condusiv.com/knowledge-center/case-studies/default.aspx

 

Try it for yourself

You can experience the full power of Condusiv’s V-locity software for yourself, in YOUR Windows environment, within a couple of minutes. Just go to www.condusiv.com/try and get a copy of the fully-featured 30-day trialware. You can check the dashboard reporting after a week or two and see just how much storage I/O traffic has been eliminated and, more importantly, how much storage time has been saved by doing so.

It really is that simple!

You don’t even need to reboot to make the software work. There is no disruption to live running workloads; you can just install and uninstall at will, and it only takes a minute or so.


You will typically start seeing results just minutes after installing.

I hope that this has been interesting and helpful. If you have any questions about the technologies within V-locity or have any questions about testing, feel free to email me directly at sallingham@condusiv.co.uk.

 

I will be delighted to hear from you!

 

 

Big Data Boom Brings Promises, Problems


By 2020, an estimated 43 trillion gigabytes of data will have been created—300 times the amount of data in existence fifteen years earlier. The benefits of big data, in virtually every field of endeavor, are enormous. We know more, and in many ways can do more, than ever before. But what of the challenges posed by this “data tsunami”? Will the sheer ability to manage—or even to physically house—all this information become a problem?

Condusiv CEO Jim D’Arezzo, in a recent discussion with Supply Chain Brain, commented that “As it has over the past 40 years, technology will become faster, cheaper, and more expansive; we’ll be able to store all the data we create. The challenge, however, is not just housing the data, but moving and processing it. The components are storage, computing, and network. All three need to be optimized; I don’t see any looming insurmountable problems, but there will be some bumps along the road.”

One example is healthcare. Speaking with Healthcare IT News, D’Arezzo noted that there are many new solutions open to healthcare providers today. “But with all the progress,” he said, “come IT issues. Improvements in medical imaging, for instance, create massive amounts of data; as the quantity of available data balloons, so does the need for processing capability.”

Giving health-care providers—and professionals in other areas—the benefits of the data they collect is not always easy. In an interview with Transforming Data with Intelligence, D’Arezzo said, “Data center consolidation and updating is a challenge. We run into cases where organizations do consolidation on a ‘forklift’ basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often discover that performance has degraded. A bottleneck has been created that needs to be handled with optimization.”

The news is all over it. You are experiencing it. Big data. Big problems. At Condusiv®, we get it.  We’ve seen users of our I/O reduction software solutions increase the capability of their storage and servers, including SQL servers, by 30% to 50% or more. In some cases, we’ve seen results as high as 10X initial performance—without the need to purchase a single box of new hardware. The tsunami of data—we’ve got you covered.

If you’re interested in working with a firm that can reduce your two biggest silent killers of SQL performance, request a demo with an I/O performance specialist now.

If you want to hear why your heaviest workloads are only processing half the throughput they should from VM to storage, view this short video.

Why Faster Storage May NOT Fix It


 

With all the myriad of possible hardware solutions to storage I/O performance issues, the question that people are starting to ask is something like:

         If I just buy newer, faster Storage, won’t that fix my application performance problem?

 The short answer is:

         Maybe Yes (for a while), Quite Possibly No.

I know – not a satisfying answer.  For the next couple of minutes, I want to take a 10,000-foot view of just three issues that affect I/O performance to shine some technical light on the question and hopefully give you a more satisfying answer (or maybe more questions) as you look to discover IT truth.  There are other issues, but let’s spend just a moment looking at the following three:

1.     Non-Application I/O Overhead

2.     Data Pipelines

3.     File System Overhead

These three issues by themselves can create I/O bottlenecks causing degradation to your applications of 30-50% or more.

Non-Application I/O Overhead:

One of the most commonly overlooked performance issues is that an awful lot of I/Os are NOT application generated.  Maybe you can add enough DRAM and go to an NVMe direct attached storage model and get your application data cached at an 80%+ rate.  Of course, you still need to process Writes, and the NVMe probably makes that a lot faster than what you can do today.  But you still need to get it to the Storage.  And there are lots of I/Os generated on your system that are not directly from your application.  There are also lots of application-related I/Os that are not targeted for caching – they’re simply non-essential overhead I/Os to manage metadata and such.  People generally don’t think about the management layers of the computer and application that have to perform Storage I/O just to make sure everything can run.  Those I/Os hit the data path to Storage along with the I/Os your application has to send to Storage, even if you have huge caches.  They get in the way and make your application-specific I/Os stall and slow down responsiveness.

And let’s face it, a full Hyper-Converged, NVMe based storage infrastructure sounds great, but there are lots of issues besides the enormous cost with that.  What about data redundancy and localization?  That brings us to issue # 2.

Data Pipelines: 

Since your data is exploding and you’re pushing 100s of Terabytes, perhaps Petabytes and in a few cases maybe even Exabytes of data, you’re not going to get that much data on your one server box, even if you didn’t care about hardware/data failures.  

Like it or not, you have an entire infrastructure of Servers, Switches, SANs, whatever.  Somehow, all that data needs to get to and from the application and wherever it is stored.  And if you add Cloud storage into the mix, it gets worse. At some point the data pipes themselves become the limiting factor.  Even with Converged infrastructures, and software technologies that stage data for you where it is supposedly needed most, data needs to be constantly shipped along a pipe that is nowhere close to the speed of access that your new high-speed storage can handle.  Then add lots of users and applications simultaneously beating on that pipe and you can quickly start to visualize the problem.

If this wasn’t enough, there are other factors and that takes us to issue #3.

File System Overhead:

You didn’t buy your computer to run an operating system.  You bought it to manipulate data.  Most likely, you don’t even really care about the actual application.  You care about doing some kind of work.  Most people use Microsoft Word to write documents.  I did to draft this blog.  But I didn’t really care about using Word.  I cared about writing this blog and Word was something I had, I knew how to use and was convenient for the task.  That’s your application, but manipulating the data is your real conquest.  The application is a tool to allow you to paint a beautiful picture of your data, so you can see it and accomplish your job better.

The Operating System (let’s say Windows), is one of a whole stack of tools between you, your application and your data.  Operating Systems have lots of layers of software to manage the flow from your user to the data and back.  Storage is a BLOB of stuff.  Whether it is spinning hard drives, SSDs, SANs, cloud-based storage, or you name it, it is just a canvas where the data can be stored.  One of the first strokes of the brush that will eventually allow you to create that picture you want from your data is the File System.  It brings some basic order.  You can see this by going into Windows File Explorer and perusing the various folders.  The file system abstracts that BLOB into pieces of data in a hierarchical structure with folders, files, file types, information about size/location/ownership/security, etc... you get the idea.  Before the painting you want to see from your data emerges, a lot of strokes need to be placed on the canvas and a lot of those strokes happen from the Operating and File Systems.  They try to manage that BLOB so your Application can turn it into usable data and eventually that beautiful (we hope) picture you desire to draw. 

Most people know there is an Operating System and those of you reading this know that Operating Systems use File Systems to organize raw data into useful components.  And there are other layers as well, but let’s focus.  The reality is there are lots of layers that have to be compensated for.  Ignoring file system overhead and focusing solely on application overhead is ignoring a really big Elephant in the room.

Let’s wrap this up and talk about the initial question.  If I just buy newer, faster Storage won’t that fix my application performance?  I suppose if you have enough money you might think you can.  You’ll still have data pipeline issues unless you have a very small amount of data, little if any data/compute redundancy requirements and a very limited number of users.  And yet, the File System overhead will still get in your way. 

When SSDs were starting to come out, Condusiv® worked with several OEMs to produce software to handle obvious issues like the fact that writes were slower and re-writes were limited in number. In doing that work, one of our surprise discoveries was that once you got beyond a certain level of file system fragmentation, the File System overhead of trying to collect/arrange the small pieces of data made a huge impact regardless of how fast the underlying storage was.  Just making sure data wasn’t broken down into too many pieces each time a need to manipulate it came along provided truly measurable and, in some instances, incredible performance gains.

Then there is that whole issue of I/Os that have nothing to do with your data/application. We also discovered that there was a path to finding and eliminating those I/Os that, while not obvious, made substantial differences in performance, because we could remove them from the flow, allowing the I/Os your application wants to perform to happen without the noise.  Think of traffic jams.  Have you ever driven in stop-and-go traffic and noticed there aren’t any accidents or other distractions to account for such slowness?  It’s just too many vehicles on the road with you.  What if you could get all the people who were just out for a drive off the road?  You’d get where you want to go a LOT faster.  That’s what we figured out how to do.  And it turns out no one else is focused on that - not the Operating System, not the File System, and certainly not your application. 

And then you got swamped with more data.  Okay, so you’re in an industry where regulations forced that decision on you.  Either way, you get the point.  There was a time when 1GB was more storage than you would ever need.  Not too long ago, 1TB was the ultimate.  Now that embedded SSD on your laptop is 1TB.  Before too long, your phone will have 1TB of storage.  Mine has 128GB, but hey I’m a geek and MicroSD cards are cheap.  My point is that the explosion of data in your computing environment strains File System Architectures.  The good news is that we’ve built technologies to compensate for and fix limitations in the File System.

Let me wrap this up by giving you a 10,000-foot view of us and our software.  The big picture is we have been focused on Storage Performance for a very long time and at all layers.  We’ve seen lots of hardware solutions that were going to fix Storage slowness.  And we’ve seen that about the time a new generation comes along, there will be reasons it will still not fix the problem.  Maybe it does today, but tomorrow you’ll overtax that solution as well.  As computing gets faster and storage gets denser, your needs/desires to use it will grow even faster.  We are constantly looking into the crystal ball knowing the future presents new challenges.  We know by looking into the rear-view mirror, the future doesn’t solve the problem, it just means the problems are different.  And that’s where I get to have fun.  I get to work on solving those problems before you even realize they exist.  That’s what turns us on.  That’s what we do, and we have been doing it for a long time and, with all due modesty, we’re really good at it! 

So yes, go ahead and buy that shiny new toy.  It will help, and your users will see improvements for a time.  But we’ll be there filling in those gaps and your users will get even greater improvements.  And that’s where we really shine.  We make you look like the true genius you are, and we love doing it.

  

 

Cultech Limited Solves ERP and SQL Troubles with Diskeeper 18 Server


Before discovering Diskeeper®, Cultech Limited experienced sluggish ERP and SQL performance, unnecessary downtime, and lost valuable hours each day troubleshooting issues related to Windows write inefficiencies.

As an internationally recognized innovator and premium quality manufacturer within the nutritional supplement industry, the usual troubleshooting approaches just weren’t cutting it. “We were running a very demanding ERP system on legacy servers and network. A hardware refresh was the first step in troubleshooting our issues. As much as we did see some improvement, it did not solve the daily breakdowns associated with our Sage ERP,” said Rob, IT Manager, Cultech Limited.

After upgrading the network and replacing ERP and SQL servers and not seeing much improvement, Rob further dug into troubleshooting approaches and SQL optimizations. With months of troubleshooting and SQL optimizations and no relief, Rob continued to research and find a way to improve performance issues, knowing that Cultech could not continue to interrupt productivity multiple times a day to fix corrupted records. As Rob explains, “I was on support calls with Sage literally day and night to solve issues that occurred daily. Files would not write properly to the database, and I would have to go through the tedious process of getting all users to logout of Sage then manually correct the problem – a 25-min exercise. That might not be a big deal every so often, but I found myself doing this 3-4 times a day at times.”

In doing his research, Rob found Condusiv’s® Diskeeper Server and decided to give it a try after reading customer testimonials on how it had solved similar performance issues. To Cultech’s surprise, after just 24-hours of being installed, they were no longer calling Sage support. “I installed Diskeeper and crossed my fingers, hoping it would solve at least some of our problems. It didn’t just solve some problems, it solved all of our problems. I was calling Sage support daily then suddenly I wasn’t calling them at all,” said Rob. Problems that Rob was having to fix outside of production hours had been solved thanks to Diskeeper’s ability to prevent fragmentation from occurring. And in addition to recouping hours a day of downtime during production hours, Cultech was now able to focus this time and energy on innovation and producing quality products.

“Now that we have Diskeeper optimizing our Sage servers and SQL servers, we have it running on our other key systems to ensure peak performance and optimum reliability. Instead of considering Windows write inefficiencies as a culprit after trying all else, I would encourage administrators to think of it first,” said Rob.

Read the full case study                        Download 30-day trial

Fix SQL Server Storage Bottlenecks


No SQL code changes.
No Disruption.
No Reboots.
Simple!

 

Condusiv V-locity Introduction

 

 

Whether running SQL in a physical or virtualized environment, most SQL DBAs would welcome faster storage at a reasonable price.

The V-locity® software from Condusiv® Technologies is designed to provide exactly that, but using the storage hardware that you already own. It doesn't matter if you have direct attached disks, if you're running a tiered SAN, have a tray of SSD storage or are fortunate enough to have an all-flash array; that storage layer can be a limiting factor to your SQL Server database productivity.

The V-locity software reduces the amount of storage I/O traffic that has to go out and be processed by the disk storage layer, and streamlines or optimizes the data which does have to still go out to disk.

The net result is that SQL can typically get more transactions completed in the same amount of time, quite simply because on average, it's not having to wait so much on the storage before being able to get on with its next transaction.

V-locity can be downloaded and installed without any disruption to live SQL servers. No SQL code changes are required and no reboots. Just install and typically you'll start seeing results in just a few minutes.

Before we take a more in-depth look at that, I would like to briefly mention that last year, the V-locity software was awarded the Microsoft SQL Server I/O Reliability Certification. This means that whilst providing faster storage access, V-locity didn't adversely affect the required and recommended behaviors that an I/O subsystem must provide for SQL Server, as defined by Microsoft themselves.

Microsoft ran tests for this in Azure, with SQL 2016, and used HammerDB to generate an online transaction processing type workload. Not only was V-locity able to jump through all the hoops necessary to achieve the certification, but it was also able to show an increase of about 30% more SQL transactions in the same amount of time.

In this test, that meant roughly 30% more orders processed.

They probably could have processed more too, if they had allowed V-locity a slightly larger RAM cache size.

To get more information, including best practices for running V-locity on MS SQL servers, easy ways to validate results, customer case studies and more, click here for the full article on LinkedIn.

If you simply want to try V-locity, click here for a free trial.

Use the V-locity software to not only identify those servers that cause storage I/O issues, but fix those issues at the same time.


When It Really NEEDS To Be Deleted


In late May of this year, the European Union formally adopted an updated set of rules about personal data privacy called the General Data Protection Regulation. Condusiv CEO Jim D’Arezzo, speaking with Marketing Technology Insights, said, “Penalties for noncompliance with GDPR are severe. They can be as much as 4% of an offending company’s global turnover, up to a total fine of €20 million.” 

A key provision of GDPR is the right to be forgotten, which enables any European citizen to have his or her name and identifying data permanently removed from the archives of any firm holding that data in its possession. One component of the right to be forgotten, D’Arezzo notes, is called “right to erasure,” which requires that the data be permanently deleted, i.e. irrecoverable.

Recently, the EU government has begun cracking down on international enterprises, attempting to extend the EU’s right-to-erasure laws to all websites, regardless of where the traffic originates. Many affected records consist not of fields or records in a database, but of discrete files in formats such as Excel or Word. 

So to stay compliant with GDPR—the EU being the world’s largest market, and twenty million euros being a lot of money—you need to be able to delete a file to the point that you can’t get it back. On the other hand, files get deleted by accident or mistake all the time; unless you want to permanently cripple your data archive, you need to be able to get those files back (quickly and easily).

In other words, you need a two-edged sword. For Windows-based systems, that’s exactly what’s provided by our Undelete® product line. Up to a point, any deleted file or version of an Office file can be easily restored, even if it was deleted before Undelete was installed.

If, however—as in the case of a confirmed “right to erasure” request—you need to delete it forever, you use Undelete’s SecureDelete® feature. Using specific bit patterns specified by the US National Security Agency, SecureDelete will overwrite the file to help make it unrecoverable. A second feature, Wipe Free Space, will overwrite any free space on a selected volume, using the same specific bit patterns, to clear out any previously written data in that free space.
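For readers curious about the general concept, here is a minimal, illustrative sketch of the overwrite-then-delete idea. It is not Undelete’s SecureDelete implementation; the bit patterns and path below are placeholder assumptions, and the NSA-specified patterns the product uses are not published in this post.

```python
# Minimal sketch of the general "overwrite, then delete" idea only.
# NOT Undelete's SecureDelete. The patterns are assumed example values.
# Note: on SSDs and some filesystems, in-place overwrites may not physically
# erase every copy of the data; this is purely conceptual.
import os

PATTERNS = [b"\x00", b"\xFF", b"\xAA"]   # assumed example patterns

def overwrite_and_delete(path, chunk=1024 * 1024):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in PATTERNS:
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(pattern * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())         # push each pass to the device
    os.remove(path)                      # finally remove the directory entry
```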

So with Undelete, you’re covered both ways. Customers buy it for its recovery abilities: you need to be able to hit the “oops” button and get a file back. But it can also handle the job when you need to make sure a file is gone.

 

"No matter how redundant my backups are, how secure our security is, I will always have the one group of users that manage to delete that one critical file. I have found Undelete to be an invaluable tool for just such an occasion. This software has saved us both time and money. When we migrated from a Novell Infrastructure, we needed to find a solution that would allow us to restore ‘accidentally’ deleted data from a network share. Since installing Undelete on all my servers, we have had no lost data due to accidents or mistakes."
–Juan Saldana II, Network Supervisor, Keppel AmFELS

 

For Undelete help with servers or virtual systems, click Undelete Server

To save money with Undelete on Business PCs, click Undelete Professional

You can purchase Undelete immediately online or download a free 30-day trial.

Industry-first FAL Remediation and Improved Performance for MEDITECH


When someone mentions heavy fragmentation on a Windows NTFS Volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is application failure when the application gets this error.

 

Windows Error: "The requested operation could not be completed due to a file system limitation"

 

That is exactly what happens in severely fragmented environments. These are show-stoppers that can stop a business in its tracks until the problem is remediated. We have had users report this issue to us on SQL databases, Exchange server databases, and cases involving MEDITECH EHR systems.

In fact, because of this issue, MEDITECH requires all 5x and 6x customers to address this issue and has endorsed both Condusiv® Technologies’ V-locity® and Diskeeper® I/O reduction software for “...their ability to reduce disk fragmentation and eliminate File Attribute List (FAL) saturation. Because of their design and feature set, we have also observed they accelerate application performance in a measurable way,” said Mike Belkner, Associate VP, Technology, MEDITECH.

Some refer to this extreme fragmentation problem as the “FAL Size Issue” and here is why. In the Windows NTFS file system, as files grow in size and complexity (i.e., more and more fragmented data), they can be assigned additional metadata structures. One of these metadata structures is called the File Attribute List (FAL). The FAL structure can point to different types of file attributes, such as security attributes or standard information such as creation and modification dates and, most importantly, the actual data contained within the file. In the extremely fragmented case, the FAL keeps track of where all the fragmented data for the file is located. The FAL actually contains pointers indicating the location of the file data (fragments) on the volume. As more fragments accumulate in a file, more pointers to the fragmented data are required, which in turn increases the size of the FAL. Herein lies the problem: the FAL has an upper size limit of 256KB. When that limit is reached, no more pointers can be added, which means NO more data can be added to the data file. And if it is a folder file, NO more files can be added under that folder. Applications using these files stop in their tracks, which is not what users want, especially in EHR systems.
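As a back-of-the-envelope illustration of why a fixed 256KB ceiling matters, consider the sketch below. The per-entry byte count is an assumed round number chosen only for illustration; it is not an NTFS specification and does not come from this post.

```python
# Back-of-the-envelope: why a fixed 256 KB FAL eventually caps file growth.
FAL_LIMIT = 256 * 1024          # the fixed upper limit cited above
ASSUMED_ENTRY_BYTES = 32        # assumption for illustration, not an NTFS figure

max_entries = FAL_LIMIT // ASSUMED_ENTRY_BYTES
print(f"~{max_entries:,} attribute-list entries before the FAL is full")
# Once the limit is hit, no new pointers (and therefore no new data) can be added.
```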

If a FAL reaches the size limitation, the only resolution has been to bring the volume offline (which can mean bringing the system down), copy the file to a different location (a different volume is recommended), delete or rename the original file, make sure there is sufficient contiguous free space on the original volume, reboot the system to reset the free space cache, then copy the file back. This is not a quick cycle, and if that file is large, the process can take hours to complete, which means the system will remain offline for hours while you attempt to resolve the issue.

You would think that the logical solution would be – why not just defragment those files? The problem is that traditional defragmentation utilities can cause the FAL size to grow. While it can decrease the number of pointers, it will not decrease the FAL size. In fact, due to limitations within the file system, traditional methods of defragmenting files cause the FAL size to grow even larger, making the problem worse even though you are attempting to remediate it. This is true with all other defragmenters, including the built-in defragmenter that comes with Windows. So what can be done about it?

The Solution

Condusiv Technologies has introduced a new technology to address this FAL size issue that is unique to the latest V-locity® and Diskeeper® product lineup. This new technology, called MediWrite™, contains features to help suppress this issue from occurring in the first place, give sufficient warning if it is occurring or has occurred, plus tools to quickly and efficiently reduce the FAL size offline. It includes the following:

Unique FAL handling: As indicated above, traditional methods of defragmentation can cause the FAL size to grow even further. MediWrite will detect when files are having FAL size issues and will use an exclusive method of defragmentation that helps stem the FAL growth. An industry first! It will also automatically determine how often to process these files according to their FAL size severity.

Enhanced Free space consolidation engine: One indirect cause of FAL size growth is the extreme free space fragmentation found in these cases. A new Free Space method has been developed to handle these extreme cases.

Unique FAL growth prevention: Along with MediWrite, V-locity and Diskeeper contain another very important technology called IntelliWrite® which automatically prevents new fragmentation from occurring. By preventing fragmentation from occurring, IntelliWrite minimizes any further FAL size growth issues.

Unique Offline FAL Consolidation tools: The above technologies help stop the FAL size from growing any larger, but due to File System restrictions, it cannot shrink or reduce the FAL size online. To do this, Condusiv developed proprietary offline tools that will reduce the FAL-IN-USE size in minutes.  This is extremely helpful for companies that already have a file FAL size issue before installing our software. With these tools, the user can reduce the FAL-IN-USE size back down to 100kb, 50kb, or smaller and feel completely safe from the maximum FAL size limits. The reduction process itself takes less than 5 minutes. This means that the system will only need to be taken offline for minutes which is much better than all the hours needed with the current Windows copy method.

FAL size Alerts: MediWrite will dynamically scan the volumes for any FAL sizes that have reached a certain limit (the default is a conservative 50% of the maximum size) and will create an Alert indicating this has occurred. The Alert will also be recorded in the Windows Event log, plus the user has the option to get notified by email when this occurrence happens.

 

For information, case studies, white papers and more, visit  http://www.condusiv.com/solutions/meditech-solutions/

Finance Company Deploys V-locity I/O Reduction Software for Blazing Fast VDI


When the New Mexico Mortgage Finance Authority decided to better support their users by moving away from using physical PCs and migrating to a virtual desktop infrastructure, the challenge was to ensure the fastest possible user experience from their Horizon View VDI implementation.

“Anytime an organization starts talking about VDI, the immediate concern in the IT shop is how well we will be able to support it from a performance standpoint to ensure a pristine end user experience. Although supported by EMC VNXe flash storage with high IOPS, one of our primary concerns had to do with Windows write inefficiencies that chew up a large percentage of flash IOPS unnecessarily. When you’re rolling out a VDI initiative, the one thing you can’t afford to waste is IOPS,” said Joseph Navarrete, CIO, MFA.

After Joseph turned to Condusiv’s “Set-It-and-Forget-It®” V-locity® I/O reduction software and bumped up the memory allocation for his VDI instances, V-locity was able to offload 40% of I/O from storage resulting in a much faster VDI experience to his users. When he demo’d V-locity on his MS-SQL server instances, V-locity eliminated 39% of his read I/O traffic from storage due to DRAM read caching and another 40% of write I/O operations by solving Windows write inefficiencies at the source.

After seeing the performance boost and increased efficiency to his hardware stack, Joseph ensured V-locity was running across all his systems like MS-Exchange, SharePoint, and more.

“With V-locity I/O reduction software running on our VDI instances, users no longer have to wait extra time. The same is now true for our other mission critical applications like MS-SQL. The dashboard within the V-locity UI provides all the necessary analytics about our environment and a view into what the software is actually doing for us. The fact that all of this runs quietly in the background with near-zero overhead impact and no longer requires a reboot to install or upgrade makes the software truly ‘set and forget’,” said Navarrete.

 

Read the full case study                        Download 30-day trial

This is Why a Top Performing School Recommended Condusiv Software and Doubled Performance


Lawnswood School is one of the top performing educational institutions in the UK. Their IT environment supports a workload that is split between approximately 1200 students and 200 staff.  With 1400 people and various programs and files to support, they were in search of something to help extend the life of their hardware and increase performance. They turned to Condusiv®’s V-locity® and Diskeeper® to extend the life of their storage hardware and maintain performance for their students and staff.

"Condusiv's V-locity software eliminated almost 50% of all storage I/O requests from having to be dealt with by the disk storage (SAN) layer, and that meant that when replacing the old SAN storage with a 'like-for-like' HP MSA 2040, V-locity gave me the confidence to make the purchase without having to over-spend to over-provision the storage in order to cope with all the excess unnecessary storage I/O traffic that V-locity efficiently eliminates,"said Noel Reynolds, IT Manager at Lawnswood School. Before upgrading his SAN, Noel was able to extend the life of the HP MSA 2000 SAN for 8 years “thanks to Condusiv’s I/O reduction software”.

Knowing how well the IT environment was performing at Lawnswood School, another school reached out to Noel for help, as their IT environment was almost identical, but suffering from slow and sluggish performance. They also had three VMware hosts of the same specification, the older HP MSA 2000 SAN storage and workloads that were pretty much identical. Noel Reynolds noted that: "They were almost a 'clone' school."

He continued: "I did the usual checks to discover why it wasn't working well, such as upgrading the firmware, checking the disks for errors and found nothing wrong other than bad storage performance. After comparing the storage latency, I found that Lawnswood School's disk storage was 20 times faster, even though the hardware, software and workload types were pretty much identical."

"We identified six of the 'most hit' servers and installed Condusiv's software on them. Within 24 hours, we saw a 50% boost in performance. Visibly improved performance had been returned to the users, and this really helped the end user experience.

A great example of a real-world solution." Noel concluded

 

Read the full case study                        Download 30-day trial

Undelete 11 coming soon – User Feedback Drives New Features


Soon to be released is a new major version of Undelete. I have been able to preview a pre-release version of this new Undelete and wanted to share the new enhancements. These changes were driven from current Undelete customer feedback looking for further improvement of the product. In a later blog, I will go into each new feature in more detail, but for now, I just wanted to briefly list some of these new features that will be soon available to you.

•  New User Interface: Undelete now has a familiar File Explorer-like interface that is easy to navigate, making it simple to find and recover deleted files.

    - The interface is also much faster and more responsive than before.

    - A Drag and Drop feature has been added to easily recover local files from the Undelete Recovery Bin.

 

•  Expanded File Version Protection: In previous Undelete editions, the popular ‘Versioning’ feature was limited to just Microsoft Office files. This versioning protection has been expanded to other file types.  This means that if you accidentally save a new version of a file with incorrect changes, Undelete can help you go back to the previous version to recover from those unwanted changes.

•  Enhanced Search Wizard: Expanded search capabilities have been added to help find deleted files when the user cannot recall the name of the file or where it was located. This includes wildcard name searches, deleted date ranges, and searching by who deleted the file.

•  Inclusion List: Users who only want specific deleted folders, file names, or file types to be protected can now specify them with the new inclusion list capability.

•  Cloud Support: The Common Recovery Bin can now be stored in the cloud using OneDrive and other file hosting services. This has several benefits, including saving space on your local storage and protecting these recovery files from security threats like ransomware.

 I look forward to our customers using this new and improved release of Undelete.

Can you relate? 906 IT Pros Talk About I/O Performance, Application Troubles and More


We just completed our 5th annual I/O Performance Survey that was conducted with 906 IT Professionals. This is the industry’s largest study of its kind and the research highlights the latest trends in applications that are driving performance demands and how IT Professionals are responding.

I/O Growth Continues to Outpace Expectations

The results show that organizations are struggling to get the full lifecycle from their backend storage as the growth of I/O continues to outpace expectations. The research also shows that IT Pros continue to struggle with user complaints related to sluggish performance from their I/O intensive applications, especially citing MS-SQL applications.

Comprehensive Research Data

The survey consists of 27 detailed questions designed to identify the impact of I/O growth in the modern IT environment. In addition to multiple-choice questions, the survey included optional open responses, allowing respondents to comment on why they selected a particular answer.  All of the individual responses have been included to help readers dive deeper into any question. The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

Summary of Key Findings 

1.    I/O Performance is important to IT Pros: The vast majority of IT Pros consider I/O Performance an important part of their job responsibilities. Over a third of these note that growth of I/O from applications is outpacing the useful lifecycle they expect from their underlying storage. 

2.    Application performance is suffering: Half of the IT Pros responsible for I/O performance cite they currently have applications that are tough to support from a systems performance standpoint. The toughest applications stated were: SQL, SAP, Custom/Proprietary apps, Oracle, ERP, Exchange, Database, Data Warehouse, Dynamics, SharePoint, and EMR/EHR. See page 20 for a word cloud graphic. 

3.    SQL is the top troublesome application: The survey confirms that SQL databases are the top business-critical application platform and also the environment that generates the most storage I/O traffic. Nearly a third of the IT Pros responsible for I/O performance state that they are currently experiencing staff/customer complaints due to sluggish applications running on SQL. 

4.    Buying hardware has not solved the performance problems: Nearly three-fourths of IT Pros have added new hardware to improve I/O performance. They have purchased new servers with more cores, new all-flash arrays, new hybrid arrays, server-side SSDs, etc. and yet they still have concerns. In fact, a third have performance concerns that are preventing them from scaling their virtualized infrastructures.  

5.    Still planning to buy hardware: About three-fourths of IT Pros are still planning to continue to invest in hardware to improve I/O performance. 

6.    Lack of awareness: Over half of respondents were unaware of the fact that Windows write inefficiencies generate increasingly smaller writes and reads that dampen performance and that this is a software problem that is not solved by adding new hardware. 

7.    Improve performance via software to avoid expensive hardware purchase: The vast majority of respondents felt it would be urgent/important to improve the performance of their applications via an inexpensive I/O reduction software and avoid an expensive forklift upgrade to their compute, network or storage layers. 

Most Difficult to Support Applications

Below is a word cloud representing hundreds of answers, visually showing the application environments IT Pros have the most trouble supporting from a performance standpoint. I think you can see the big ones that pop out!

The full report is available at https://learn.condusiv.com/2019-IO-Performance-Study.html

 

The Simple Software Answer

As much as organizations continue to reactively respond to performance challenges by purchasing expensive new server and storage hardware, our V-locity® I/O reduction software offers a far more efficient path by guaranteeing to solve the toughest application performance challenges on I/O intensive systems like MS-SQL. This means organizations are able to offload the 50% of I/O traffic to storage that is nothing but noise chewing up IOPS and dampening performance. As soon as that bandwidth to storage is opened up, sluggish performance disappears and there are far more storage IOPS available for other work.

In just 2 minutes, learn more about how V-locity I/O reduction software eliminates the two big I/O inefficiencies in a virtual environment: 2-min Video: Condusiv® I/O Reduction Software Overview

Try it for yourself: download our free 30-day trial – no reboot required

 

Condusiv Introduces Undelete 11, World’s Leading Windows Enterprise Data Protection and Instant Recovery


Undelete® 11 protects Windows® servers, desktops and laptops from data protection gaps that risk data loss, productivity downtime and money.

 

New Undelete 11 dramatically enhances the capabilities of our industry-leading Undelete series of data protection and recovery software. New functionality in Undelete 11 makes file recovery easier and faster, provides version protection for custom file types, and adds cloud support to offer additional protection from security threats like ransomware.

 

Undelete Fills Data Protection Gaps

“Help! I just deleted a file from the network drive!” That’s a support call any IT professional knows all too well. “When I hear that organizations are relying only on backups or snapshots or even ‘the cloud’ as a complete data protection solution, I cringe,” says Condusiv CEO James D’Arezzo. “That leaves too many gaps for a modern data center. Undelete fills the gaps and provides a first line of defense since Undelete provides true continuous data protection for easy recovery of individual files. Undelete is tailored for quick recovery of single files that are lost or overwritten.”

 

Traditional data recovery measures such as backups or snapshots, notes D’Arezzo, can require hours to restore a single file, during which time IT personnel are pulled away from more productive activities. Backups, moreover, do not capture changes made since the last backup, so work performed during the time between backups is not recoverable or protected. While snapshots can fill some of these gaps, scheduling constant snapshots and managing space utilization only adds to the system administrator’s workload.

 

Undelete’s Powerful Recovery Bin

With Undelete 11, when a file is deleted—including files deleted or overwritten from shared network folders or deleted by the Windows command prompt—it is automatically captured and stored in the Undelete Recovery Bin. The Server, Professional, and Client editions of Undelete provide access to Recovery Bins on remote computers, allowing IT staff or users (limited to files they have rights to) to recover deleted files across the network. This makes it unnecessary to search backups or Windows shadow copies when a user accidentally deletes a file from the server. Undelete’s own “Dig Deeper/Emergency Undelete” feature enables the user to search a drive or directory for files that were previously deleted, making it possible to restore files that were deleted even before Undelete was installed.

 

Undelete Saves the Day

Over 50,000 organizations rely on Undelete for instant file recovery. “We frequently hear customers say ‘Undelete saved the day!’ after recovering files they never thought they would be able to get back, and Undelete recovered them instantly,” added D’Arezzo.

 

“Engineers have accidentally deleted very critical files on a network share.  We've been able to quickly find/recover those files.  In addition, the versioning capability has also been very valuable in that we could go back one or two (or more) versions.  It's much better/faster than retrieving from a backup.  Since backups are nightly, the last backup might not even have the file if it were created and deleted on the same day.”— Bob Sauers, Senior IT Manager, PCTEST Engineering Laboratory Inc.

 

“We use Undelete so often that almost all of our users know we have it, and count on it, to save them time in recovering files or folders that were accidentally deleted.”— John Brigan, Sr Infrastructure Analyst, Marion County Board of County Commissioners

 

“We have a process where our clients 'upload' billing files to us for processing.  Due to PCI compliance we have to receive the file, seamlessly move it to the Secure LAN segment and delete it off the DMZ Web Server.  The successful moving of the file fails when there is a breakdown across the network, for whatever reason, but the Local Delete always works.  So instead of having to call the client and have them re-upload the file, we simply go to Undelete, recover the file and move on.  No company likes to call the customer to ask them to take action to solve an internal technical problem.  Undelete works.”— Kem Sisson, President, Money Movers Inc.

 

“Helped us recover CAD files when a large amount was accidentally deleted. Saved us time as we did not have to mount backup drives.”— Jeff Hafer, Applications Developer-Systems Administrator, Batesville Tool & Die

 

“We use this tool constantly and it has been very vital in several situations where executive staff had been working on a presentation or spreadsheet for an hour or more, somehow deleted it, and called our help desk to see if we could recover a file they had just created and accidentally deleted.  The ability to recover that file has been crucial, and being able to provide customer service with immediate results is great.”— Steve Lauer, Senior Systems Administrator, Maricopa County Clerk of the Superior Court

 

New Undelete 11 Features

    •  Recovery Bin – Provides complete file protection by capturing all deleted files, including files deleted from network shares, allowing instant recovery with just a few clicks.
    •  (New) New User Interface – Easy to use, simple to navigate and intuitive; a familiar File Explorer-like interface makes finding and recovering deleted files even easier.
    •  Version Protection for Microsoft Office Files – Captures old versions of Microsoft Office files, allowing recovery of intermediate versions of documents.
    •  (New) Expanded File Version Protection – File versioning has been expanded beyond Microsoft Office files to include custom file types such as CAD and Photoshop files, PDFs and more. Simply customize which file types you want version protection applied to in settings.
    •  (New) Enhanced Search Features – Easily search for deleted files by multiple criteria with a one-button search wizard. New capabilities include options to search by a date range, by who deleted the file, or by a specific folder.
    •  (New) Faster Search and Recovery – The speed of search and recovery has been improved.
    •  (New) Cloud Support – The Recovery Bin can now be stored in the cloud using OneDrive and other popular file hosting services for additional protection from security threats like ransomware.
    •  (New) Inclusion List – Provides the ability to specify that only certain deleted files, folders or file types are saved by Undelete.
    •  (New) Drag and Drop – Users can drag and drop files from their local Recovery Bin to a local drive.
    •  Dig Deeper/Emergency Undelete – Recover files deleted before Undelete was installed.
    •  SecureDelete® – In compliance with corporate governance or governmental regulatory requirements for secure data deletion, securely erase deleted files so they are virtually unrecoverable. Using a bit pattern specified by the National Security Agency (NSA) for the Department of Defense, SecureDelete not only deletes a file but overwrites the disk space the file previously occupied, making it virtually impossible for anyone to access. This is an important feature for “right to be forgotten” provisions in regulations such as the GDPR. (A simplified illustration of the overwrite-before-delete idea appears after this list.)
    •  Wipe Free Space – Securely erase free space so remnants of data cannot be recovered.
    •  Supports removable disks such as ZIP drives, flash or thumb drives, and memory cards.
    •  (New) Windows Desktop Themes – Your favorite desktop theme is now supported in the Undelete Professional edition.
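
To make the overwrite-before-delete idea a little more concrete, here is a deliberately simplified Python sketch. It is not the SecureDelete implementation and does not use the NSA-specified bit pattern; the passes shown are arbitrary examples, and on SSDs or copy-on-write file systems an in-place overwrite may not physically scrub every copy of the data.

    # Simplified illustration of overwrite-before-delete. This is NOT SecureDelete
    # and NOT the NSA-specified pattern; the passes are arbitrary examples.
    import os

    def overwrite_and_delete(path):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for pattern in (b"\x00", b"\xff"):   # example passes: zeros, then ones
                f.seek(0)
                f.write(pattern * size)
                f.flush()
                os.fsync(f.fileno())             # force the pass out to disk
            f.seek(0)
            f.write(os.urandom(size))            # final pass of random data
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)                          # only then remove the directory entry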


 

Available Undelete Editions

    •  Undelete 11 Server – Protects server files, including those deleted by network clients. Manage Undelete Server and Professional on remote systems.
    •  Undelete 11 Desktop Client – Allows users on connected laptops, workstations and VMs to recover their own files from Undelete Server Recovery Bins.
    •  Undelete 11 Professional – Protects locally stored files and allows files to be recovered from Undelete Server Recovery Bins.
    •  Undelete 11 Home – Provides comprehensive protection of locally stored files.

 

- Undelete customers with current maintenance contracts: Log in to your account to access your Undelete 11 software

- Learn More: Watch Undelete 11 videos

- Purchase Info: Buy Now Online or Request a Quote

- Trialware: Download 30-day trial software

 

 

The Challenge of IT Cost vs Performance


In over 30 years in the IT business, I can count on one hand the number of times I’ve heard an IT manager say, “The budget is not a problem. Cost is no object.”

It is as true today as it was 30 years ago.  That is, increasing pressure on the IT infrastructure, rising data loads and demands for improved performance are pitted against tight budgets.  Frankly, I’d say it’s gotten worse – it’s kind of a good news/bad news story. 

The good news is there is far more appreciation of the importance of IT management and operations than ever before.  CIOs now report to the CEO in many organizations; IT and automation have become an integral part of business; and of course, everyone is a heavy tech user on the job and in private life as well. 

The bad news is the demand for end-user performance has skyrocketed; the amount of data processed has exploded; and the growing number of uses (read: applications) of data is like a rising tide threatening to swamp even the most well-staffed and richly financed IT organizations.

Balancing the need to keep IT operations up and continuously serving the end-user community against keeping costs manageable is quite a trick these days.  Weighing capital expenditures on new hardware and infrastructure against operational expenditures on personnel, subscriptions, cloud-based services or managed service providers can become a real dilemma for IT management. 

An IT executive must be attuned to changes in technology, changes in his/her own business and the changing nature of the existing infrastructure, all while trying to extend the life of equipment as far as possible. 

Performance demands keep IT professionals awake at night.  The hard truth is that the dreaded 2:00 a.m. call about a crashed server or network, or a halt of operations during a critical business period (think end-of-year closing, peak sales season, or inventory cycle), reveals that many IT organizations are holding on by the skin of their teeth.

Condusiv has been in the business of improving the performance of Windows systems for 30 years.  We’ve seen it all.  One of the biggest mistakes an IT decision-maker can make is to go along with the “common wisdom” (primarily pushed by hardware manufacturers) that the only way to improve system and application performance is to buy new hardware.  Certainly, at some point hardware upgrades are necessary, but the fact is, some 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 or Windows Server 2019. Also see earlier article Windows is Still Windows).

Don’t get me wrong, Windows is an amazing solution used by some 80% of all systems on the planet.  But as the storage layer has been logically separated from the compute layer and more systems are being virtualized, Windows handles I/O logically rather than physically, which means it breaks down reads and writes to their lowest common denominator, creating tiny, fractured, random I/O that creates a “noisy” environment.  Add a growing number of virtualized systems into the mix and you really create overhead (you may have even heard of the “I/O blender effect”).

The bottom line: much of performance degradation is a software problem that can be solved by software.  So, rather than buying a “forklift upgrade” of new hardware, our customers are offloading 30-50% or more of their I/O, which dramatically improves performance.  By simply adding our patented software, our customers avoid the disruption of migrating to new systems, rip and replace, end-user training and the rest of that challenge. 

Yes, the above paragraph could be considered a pitch for our software, but the fact is, we’ve sold over 100 million copies of our products to help IT professionals get some sleep at night.  We’re the world leader in I/O reduction. We improve system performance an average of 30-50% or more (often far more).  Our products are non-disruptive to the point that we even trademarked the term “Set It and Forget It®”.  We’re proud of that, and the help we’re providing to the IT community.

 

 

To try for yourself, download a free, 30-day trial version (no reboot required) at www.condusiv.com/try

SQL Server Database Performance


How do I get the most performance from my SQL Server?

SQL Server applications are typically the most I/O intensive applications for any enterprise and thus are prone to suffer performance degradation. Anything a database administrator can do to reduce the amount of I/O necessary to complete a task will increase the server’s performance of the application.

Excess and noisy I/O has typically been found to be the root cause of numerous SQL performance problems such as:

  • SQL query timeouts
  • SQL crashes
  • SQL latency
  • Slow data transfers
  • Slow or sluggish SQL-based applications
  • Reports taking too long
  • Back office batch jobs bleeding over into production hours
  • User complaints; users having to wait for data

 

Some of the most common actions DBAs often resort to are:

  • Tuning queries to minimize the amount of data returned
  • Adding extra spindles or flash for performance
  • Increasing RAM
  • Index maintenance to improve read and/or write performance

 

Most performance degradation is a software problem that can be solved by software

None of these actions will prevent the hardware bottlenecks that occur because 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 or Windows Server 2019).

 

Two Server I/O Inefficiencies



As the storage layer has been logically separated from the compute layer and more systems are being virtualized, Windows handles I/O logically rather than physically, which means it breaks down reads and writes to their lowest common denominator, creating tiny, fractured, random I/O that creates a “noisy” environment that becomes even worse in a virtual environment due to the “I/O blender effect”.



This is what a healthy I/O stream SHOULD look like in order to get optimum performance from your hardware infrastructure. With a nice healthy relationship between I/O and data, you get clean contiguous writes and reads with every I/O operation.

Return Optimum Performance – Solve the Root Cause, Instantly

 

Condusiv®’s patented solutions address root cause performance issues at the point of origin where I/O is created, by ensuring large, clean contiguous writes from Windows to eliminate the “death by a thousand cuts” scenario of many small writes and reads that chew up performance. Condusiv solutions electrify the performance of Windows servers even further with the addition of DRAM caching – using idle, unused DRAM to serve hot reads without creating an issue of memory contention or resource starvation. Condusiv’s “Set It and Forget It” software optimizes both writes and reads to solve your toughest application performance challenges. Video: Condusiv I/O Reduction Software Overview.

 

Lab Test Results with V-locity I/O reduction software installed


 

Best Practice Tips to Boost SQL Performance with V-locity

 

 

By following the best practices outlined here, users can achieve a 2X or faster boost in MS-SQL performance with Condusiv’s V-locity® I/O reduction software.

- Provision an additional 4-16 GB of memory to the SQL Server if you have additional memory to give.

- Cap MS-SQL memory usage, leaving the additional memory for the OS and our software. Note: Condusiv software will leverage whatever memory is unused by the OS.

- If there is no additional memory to add, cap SQL memory usage so that 8 GB is left for the OS and our software. Note: this may not achieve 2X gains, but will likely boost performance 30-50%, as SQL is highly inefficient with its memory usage.

- Download and install the software – condusiv.com/try. No SQL code changes needed. No reboot required. Note: allow 24 hours for the algorithms to adjust.

- After a few days in production, pull up the dashboard and look for a 50% reduction in I/O traffic to storage. Note: if offloading less than 50% of I/O traffic, consider adding more memory for the software to leverage and watch the benefit rise on read-heavy apps.
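
For admins who would rather script the memory cap than set it by hand, here is a minimal sketch of how it could be done with Python and pyodbc, issuing the standard sp_configure / RECONFIGURE commands. This is an illustration only, not a Condusiv utility; the server name, ODBC driver string and the 16 GB figure are assumptions you would replace with values appropriate to your own environment.

    # Minimal sketch (not a Condusiv tool): cap SQL Server's 'max server memory'
    # so that idle RAM is left over for the OS and the IntelliMemory cache.
    # Assumes pyodbc and a SQL Server ODBC driver are installed; names are placeholders.
    import pyodbc

    SERVER = "localhost"            # hypothetical server name
    MAX_SQL_MEMORY_MB = 16 * 1024   # example cap in MB; size it to leave RAM free

    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={SERVER};Trusted_Connection=yes;",
        autocommit=True,            # RECONFIGURE cannot run inside a transaction
    )
    cur = conn.cursor()

    # Make 'max server memory' visible to sp_configure, then apply the cap.
    cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
    cur.execute(f"EXEC sp_configure 'max server memory (MB)', {MAX_SQL_MEMORY_MB}; RECONFIGURE;")
    conn.close()
    print(f"SQL Server max memory capped at {MAX_SQL_MEMORY_MB} MB")

The same setting can also be changed interactively in SQL Server Management Studio (right-click the server, Properties, Memory page).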

Thinking Outside the Box - How to Dramatically Improve SQL Performance, Part 1


If you are reading this article, then most likely you are about to evaluate V-locity® or Diskeeper® on a SQL Server (or already have our software installed on a few servers) and have some questions about why it is a best practice recommendation to place a memory limit on SQL Servers in order to get the best performance from that server once you’ve installed one of our solutions.

To give our products a fair evaluation, there are certain best practices we recommend you follow.  Now, while it is true most servers already have enough memory and need no adjustments or additions, a select group of high I/O, high performance, or high demand servers, may need a little extra care to run at peak performance.

This article is specifically focused on those servers, and the best-practice recommendations below for available memory are precisely targeted at those “work-horse” servers.  So, rest assured you don’t need to worry about adding tons of memory to your environment for all your other servers.

One best practice we won’t dive into here, which will be covered in a separate article, is the idea of deploying our software solutions to other servers that share the workload of the SQL Server, such as App Servers or Web Servers that the data flows through.  However, in this article we will shine the spotlight on best practices for SQL Server memory limits.

We’ve sold over 100 million licenses in over 30 years of providing Condusiv® Technologies patented software.  As a result, we take a longer term and more global view of improving performance, especially with the IntelliMemory® caching component that is part of V-locity and Diskeeper. We care about maximizing overall performance knowing that it will ultimately improve application performance.  We have a significant number of different technologies that look for I/Os that we can eliminate out of the stream to the actual storage infrastructure.  Some of them look for inefficiencies caused at the file system level.  Others take a broader look at the disk level to optimize I/O that wouldn’t normally be visible as performance robbing.  We use an analytical approach to look for I/O reduction that gives the most bang for the buck.  This has evolved over the years as technology changes.  What hasn’t changed is our global and long-term view of actual application usage of the storage subsystem and maximizing performance, especially in ways that are not obvious.

Our software solutions eliminate I/Os to the storage subsystem that the database engine is not directly concerned with and as a result we can greatly improve the speed of I/Os sent to the storage infrastructure from the database engine.  Essentially, we dramatically lessen the number of competing I/Os that slow down the transaction log writes, updates, data bucket reads, etc.  If the I/Os that must go to storage anyway aren’t waiting for I/Os from other sources, they complete faster.  And, we do all of this with an exceptionally small amount of idle, free, unused resources, which would be hard pressed for anyone to even detect through our self-learning and dynamic nature of allocating and releasing resources depending on other system needs.

It’s common knowledge that SQL Server has specialized caches for the indexes, transaction logs, etc.  At a basic level the SQL Server cache does a good job, but it is also common knowledge that it’s not very efficient.  It uses up way too much system memory, is limited in scope of what it caches, and due to the incredible size of today’s data stores and indexes it is not possible to cache everything.  In fact, you’ve likely experienced that out of the box, SQL Server will grab onto practically all the available memory allocated to a system.

It is true that if SQL Server memory usage is left uncapped, there typically wouldn’t be enough memory for Condusiv’s software to create a cache with.  Hence, we recommend placing a maximum memory limit on SQL Server to leave enough memory for the IntelliMemory cache to help offload more of the I/O traffic.  For best results, you can easily cap the amount of memory that SQL Server consumes for its own form of caching or buffering.  At the end of this article I have included a link to a Microsoft document on how to set Max Server Memory for SQL as well as a short video to walk you through the steps.

A general rule of thumb for busy SQL database servers would be to limit SQL memory usage to keep at least 16 GB of memory free.  This would allow enough room for the IntelliMemory cache to grow and really make that machine’s performance 'fly' in most cases.  If you can’t spare 16 GB, leave 8 GB.  If you can’t afford 8 GB, leave 4 GB free.  Even that is enough to make a difference.  If you are not comfortable with reducing the SQL Server memory usage, then at least place a maximum value of what it typically uses and add 4-16 GB of additional memory to the system.  

We have intentionally designed our software so that it can’t compete for system resources with anything else that is running.  This means our software should never trigger a memory starvation situation.  IntelliMemory will only use some of the free or idle memory that isn’t being used by anything else, and will dynamically scale our cache up or down, handing memory back to Windows if other processes or applications need it.

Think of our IntelliMemory caching strategy as complementary to what SQL Server caching does, but on a much broader scale.  IntelliMemory caching is designed to eliminate the type of storage I/O traffic that tends to slow the storage down the most.  While that tends to be the smaller, more random read I/O traffic, there are often times many repetitive I/Os, intermixed with larger I/Os, which wreak havoc and cause storage bandwidth issues.  Also keep in mind that I/Os satisfied from memory are 10-15 times faster than going to flash.  

So, what’s the secret sauce?  We use a very lightweight storage filter driver to gather telemetry data.  This allows the software to learn useful things like:

- What are the main applications in use on a machine?
- What type of files are being accessed and what type of storage I/O streams are being generated?
- And, at what times of the day, the week, the month, the quarter? 

IntelliMemory is aware of the 'hot blocks' of data that need to be in the memory cache, and more importantly, when they need to be there.  Since we only load data we know you’ll reference in our cache, IntelliMemory is far more efficient in terms of memory usage versus I/O performance gains.  We can also use that telemetry data to figure out how best to size the storage I/O packets to give the main application the best performance.  If the way you use that machine changes over time, we automatically adapt to those changes, without you having to reconfigure or 'tweak' any settings.
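
To make the idea of an idle-memory, telemetry-driven cache a little more concrete, here is a deliberately simplified Python sketch of a read cache that only grows while the system has spare memory and gives blocks back as soon as it does not. It is purely illustrative: it is not IntelliMemory’s actual algorithm, it runs in user space rather than in a storage driver, and the 4 GB floor and the psutil dependency are assumptions made only for the example.

    # Purely illustrative sketch of an "only use idle memory" read cache.
    # NOT the IntelliMemory implementation; thresholds and names are invented.
    from collections import OrderedDict
    import psutil

    MIN_FREE_BYTES = 4 * 1024**3        # hypothetical floor: always leave 4 GB free

    class IdleMemoryReadCache:
        def __init__(self):
            self._blocks = OrderedDict()    # (volume, offset) -> bytes, kept in LRU order

        def _room_for(self, size):
            # Only grow while available physical memory stays above the floor.
            return psutil.virtual_memory().available - size > MIN_FREE_BYTES

        def get(self, key):
            data = self._blocks.get(key)
            if data is not None:
                self._blocks.move_to_end(key)      # mark the block as recently used ("hot")
            return data                            # None means the caller reads from storage

        def put(self, key, data):
            # Give memory back first if anything else on the system needs it.
            while self._blocks and not self._room_for(len(data)):
                self._blocks.popitem(last=False)   # evict the coldest block
            if self._room_for(len(data)):
                self._blocks[key] = data

A real driver-level cache also uses the telemetry described above to decide which blocks are worth holding and when; the sketch only captures the “hand memory back the moment anything else needs it” behavior.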


Stay tuned for the next in the series: Thinking Outside the Box Part 2 – Test vs. Real World Workload Evaluation.

 

Main takeaways:

- Most of the servers in your environment already have enough free and available memory and will need no adjustments of any kind.
- Limit SQL memory so that there is a minimum of 8 GB free for any server with more than 40 GB of memory and a minimum of 6 GB free for any server with 32 GB of memory.  If you have the room, leave 16 GB or more memory free for IntelliMemory to use for caching.
- Another best practice is to deploy our software to all Windows servers that interact with the SQL Server.  More on this in a future article.

 

 

Microsoft Document – Server Memory Server Configuration Options

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017

 

Short video – Best Practices for Available Memory for V-locity or Diskeeper

https://youtu.be/vwi7BRE58Io

At around the 3:00 minute mark, capping SQL Memory is demonstrated.

SysAdmins Discover That Size Really Does Matter


(...to storage transfer speeds...)

 

I was recently asked what could be done to maximize storage transfer speeds in physical and virtual Windows servers. Not the "sexiest" topic for a blog post, I know, but it should be interesting reading for any SysAdmin who wants to get the most performance from their IT environment, or for those IT Administrators who suffer from user or customer complaints about system performance.

 

As it happens, I had just completed some testing on this very subject and thought it would be helpful to share the results publicly in this article.

The crux of the matter comes down to storage I/O size and its effect on data transfer speeds. You can see in this set of results, using an NVMe-connected SSD (Samsung MZVKW1T0HMLH Model SM961), that the read and write transfer speeds (put another way, how much data can be transferred each second) are MUCH lower when the storage I/O sizes are below 64 KB:

 

You can see that whilst the transfer rate maxes out at around 1.5 GB per second for writes and around 3.2 GB per second for reads, when the storage I/O sizes are smaller, you don’t see disk transfer speeds at anywhere near that maximum rate. And that’s okay if you’re only saving 4 KB or 8 KB of data, but is definitely NOT okay if you are trying to write a larger amount of data, say 128 KB or a couple of megabytes, and the Windows OS is breaking that down into smaller I/O packets in the background and transferring to and from disk at those much slower transfer rates. This happens way too often and means that the Windows OS is dampening efficiency and transferring your data at a much slower transfer rate than it could, or it should. That can have a very negative impact on the performance of your most important applications, and yes, they are probably the ones that users are accessing the most and are most likely to complain about.
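
If you would rather script a rough version of this check than install a GUI benchmark, the small Python sketch below reads a scratch file sequentially at a few different I/O sizes and prints the resulting MB/sec. The file path, file size and block sizes are arbitrary examples, and because the operating system's file cache can inflate the numbers, treat the output as a relative comparison rather than a match for ATTO's figures.

    # Rough illustration: sequential read throughput at different I/O sizes.
    # Not a replacement for ATTO Disk Benchmark; OS file caching can inflate results,
    # so treat the output as a relative comparison only.
    import os, time

    PATH = "scratch_test.bin"                         # hypothetical scratch file on the disk under test
    FILE_SIZE = 256 * 1024 * 1024                     # 256 MB test file
    BLOCK_SIZES = [4 * 1024, 64 * 1024, 1024 * 1024]  # 4 KB, 64 KB and 1 MB reads

    with open(PATH, "wb") as f:                       # create the scratch file once
        f.write(os.urandom(FILE_SIZE))

    for block in BLOCK_SIZES:
        start = time.perf_counter()
        with open(PATH, "rb", buffering=0) as f:      # unbuffered at the Python level
            while f.read(block):
                pass
        elapsed = time.perf_counter() - start
        print(f"{block // 1024:>5} KB reads: {FILE_SIZE / elapsed / 1024**2:,.0f} MB/sec")

    os.remove(PATH)                                   # clean up the scratch file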

 

The good news, of course, is that the V-locity® software from Condusiv® Technologies is designed to prevent these split I/O situations in Windows virtual machines, and Diskeeper will do the same for physical Windows systems. Installing Condusiv’s software is a quick, easy and effective fix as there is no disruption, no code changes required and no reboots. Just install our software and you are done!

You can even run this test for yourself on your own machine. Download a free copy of ATTO Disk Benchmark from the web and install it. You can then click its Start button to quickly benchmark how fast YOUR system transfers data at different I/O sizes. I bet you quickly see that when it comes to data transfer speeds, size really does matter!

Out of interest, I enabled our Diskeeper software (I could have used V-locity instead) so that our RAM caching would assist the speed of the read I/O traffic, and the results were pretty amazing. Instead of the reads maxing out at around 3.2 GB per second, they were now maxing out at around a whopping 11 GB per second, more than three times faster. In fact, the ATTO Disk Benchmark software had to change the graph scale for the transfer rate (X-axis) from 4 GB/s to 20 GB/s, just to accommodate the extra GBs per second when the RAM cache was in play. Pretty cool, eh?

 

Of course, it is unrealistic to expect our software’s RAM cache to satisfy ALL of the read I/O traffic in a real live environment as with this lab test, but even if you satisfied only 25% of the reads from RAM in this manner, it certainly wouldn’t hurt performance!!!

If you want to see this for yourself on one of your computers, download the ATTO Disk Benchmark tool from the web, if you haven’t already, and as mentioned before, run it to get a benchmark for your machine. Then download and install a free trial copy of Diskeeper for physical clients or servers, or V-locity for virtual machines, from www.condusiv.com/try and run the ATTO Disk Benchmark tool several times. It will probably take a few runs of the test, but you should easily see the point at which the telemetry in Condusiv’s software identifies the correct data to satisfy from the RAM cache, as the read transfer rates will increase dramatically. They are no longer confined to the speed of your disk storage, but instead now happen at the speed of RAM. Much faster, even if that disk storage IS an NVMe-connected SSD. And yes, if you’re wondering, this does work with SAN storage and all levels of RAID too!

NOTE: Before testing, make sure you have enough “unused” RAM to cache with. A minimum of 4 GB to 6 GB of Available Physical Memory is perfect.
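
A quick way to check that figure is the Memory section of Task Manager's Performance tab; if you prefer a script, this illustrative snippet (which assumes the third-party psutil package, not anything our software requires) prints the same number:

    # Illustrative check of Available Physical Memory before testing.
    import psutil

    available_gb = psutil.virtual_memory().available / 1024**3
    print(f"Available physical memory: {available_gb:.1f} GB")
    if available_gb < 4:
        print("Less than 4 GB available - the RAM cache will have little room to work with.")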

Whether you have spinning hard drives or SSDs in your storage array, the boost in read data transfer rates can make a real difference. Whatever storage you have serving YOUR Windows computers, it just doesn’t make sense to allow the Windows operating system to continue transferring data at a slower speed than it should. Now with easy to install, “Set It and Forget It®” software from Condusiv Technologies, you can be sure that you’re getting all of the speed and performance you paid for when you purchased your equipment, through larger, more sequential storage I/O and the benefit of intelligent RAM caching.

If you’re still not sure, run the tests for yourself and see.

Size DOES matter!

Undelete Saves Your Bacon, An In-depth Video Series


Undelete® is a lot more than those simple file recovery utilities that just search through free space on Windows machines looking for recoverable data. Undelete does so much more, protecting files in network shared folders and capturing versions of any number of file types.

If you've ever had to rely on restoring from a backup or a snapshot to get a deleted file back, watch now to find out how Undelete makes the recovery faster and more convenient on workstations, laptops and Windows servers.

Undelete, the world’s #1 file recovery software, as a first line of defense in your disaster recovery strategy can save your bacon!

“Undelete saved my bacon.”— Ken C, Cleveland State University

Why are some deleted files not in the Windows Recycle Bin?

Were you aware that the Windows Recycle Bin falls short of capturing all file deletions?

Whilst the Recycle Bin is very quick and convenient, it doesn’t capture:

· Files deleted from the Command Prompt

· Files deleted from within some applications

· Files deleted by network users from a Shared Folder

Undelete from Condusiv Technologies can capture ALL deletions, regardless of how they occur.

“It saved our bacon when a file on my system was accidentally deleted from another workstation. That recovery saved hours of work and sold us on the usefulness of the product.”

“Our entire commissions database was saved by the Undelete program. Very happy about that. We would have lost a week of commissions (over 2000 records easily). We were very grateful that we had your product.”— Frank B, Technical Manager, World Travel, Inc.

Watch this video for a demonstration of why the Recycle Bin falls short and how the Undelete software can pick up the slack and truly become the first line of defense in your disaster recovery strategy. 

What is Undelete File Versioning?

Have you ever accidentally overwritten a Microsoft Word document, spreadsheet or some other file?

Would it be helpful to have several versions of the same file available for recovery in the Windows Recycle Bin? Sorry, but the Recycle Bin can’t do that.

However, the Undelete Recovery Bin can!

“I'm glad I found yours -- it works very well, and the recovery really saved my bacon!”— John

Watch this video to see a demonstration of how capturing several versions of the same file when they get overwritten can really help save time as well as data.

Searching the Undelete Recovery Bin

Recover deleted files quickly and conveniently with Undelete’s easy search functions.

Even if you only know part of the file name, or aren’t sure what folder it was deleted from, see in this video how easy it is to find and recover the file that you need.

“I would recommend Undelete as it has saved my bacon a couple of times when I was able to recover something that I deleted by accident.”— Joseph

Inclusion and Exclusion lists in Undelete

Find out how to use Inclusion and Exclusion Lists in the Undelete software to only capture those files that you really might want to recover and exclude all of those files that you don’t really care about.

Have you ever needed to get a file back that was deleted during a Windows Update? Probably not, so why have those files take up space in your Recovery Bin?

“It saved my bacon a few times.”— Jason

Watch this to see how configurable the Undelete Recovery Bin is.

Emergency Undelete Software

See a demonstration showing how easy it is to recover deleted files, even BEFORE you install the Undelete software from Condusiv Technologies.

Prevent that awful moment of extreme realization when you delete a file that isn’t backed up.

Oh! And if you’ve found this page because you need to recover a file right now, click here to get the free 30-day trialware of Undelete. We hope this helps you out of the jam!

“It has saved my bacon a couple of times when I was able to recover something that I deleted by accident.”

How to safely delete files before recycling your computer with Undelete

Want to get a new computer, but worry what would happen to your personal data if you recycled your old one, or sold it?

Watch now to see how to securely wipe your files from your computer’s hard drives with SecureDelete®, which is included in the Undelete software from Condusiv Technologies, before recycling your old computer, selling it, or passing it on to a friend.

We hope these videos help you navigate Undelete like a pro, and perhaps save your bacon, too!

Watch the Series - here!
