How to fix legacy software, availability and budget storage issues

In this 3-part series, we talk about the ten most common data storage pain points in large enterprise environments and explain several ways to overcome them. In Part 1, we explained how data storage pain can be assessed using the same universal pain scale that medical professionals use to determine patient treatment. Then we discussed each of the first three problems in the top ten: storage capacity, performance and scalability.

Dealing with legacy software, lack of availability and storage budget issues.

4. Legacy software problems - outdated systems hurt user productivity.

Contrary to what they may tell you, large incumbent storage vendors are not risk-free. Large legacy code bases are often full of untested code and lurking problems, and support is difficult to scale. Combined, these two things can lead to frustrating support engagements.

Of course, customer support should rank high on your list - especially if you have tight deadlines or large data sets. Look at the vendor's track record of helping other customers resolve issues.

Less obviously, you should examine the technical organization. Do they use modern software development techniques? How seriously do they take testing? How do they ship software? What is their track record for releases? This should play a bigger role in the minds of storage decision makers than it currently does. Storage is a long-term relationship, and engineering is an important factor in the future value of that relationship.

Don't be afraid to investigate software development at your storage vendors. Talk to existing customers about how accurate the engineering roadmap has been and how good the release record is. Talk to executives about vision and direction.

Measure your projected needs against the vendor's roadmap. You will put your crown jewels on the storage system you buy, and that system will be around for a long time. Your chosen vendors need to be moving in the same direction as you.

5. Storage availability issues - storage that is not resilient occasionally fails, impacting productivity.

When data is not available, work stops - and the cost of that stoppage multiplies if you have a large team of creatives or technicians blocked by unavailable storage.

With a monolithic system, availability can be tricky. You often have to buy two systems and then add a software layer that can move a workload between them in case of failure. You should carefully consider the cost of adding redundancy to a monolithic system and make sure the insurance policy is worth it.

If your downtime costs are very high, it may be worth buying two systems for redundancy reasons alone. However, if your downtime costs are flexible or low, the extra system may not be worth it - so think about that when considering a backup.
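The insurance-policy math above can be sketched in a few lines. Every figure below is hypothetical - plug in your own downtime cost, your vendor's historical outage record, and the actual quote for a second system:

```python
def redundancy_worth_it(hourly_cost, outage_hours_per_year, second_system_annual_cost):
    """Compare the expected annual downtime loss against the cost of a redundant system."""
    expected_loss = hourly_cost * outage_hours_per_year
    return expected_loss > second_system_annual_cost, expected_loss

# Hypothetical figures: 50 blocked creatives at a $1,000/hour loaded cost,
# 8 expected outage hours per year, $150k/year amortized for the second system.
worth_it, loss = redundancy_worth_it(50_000, 8, 150_000)
print(f"Expected annual downtime loss: ${loss:,}")  # loss of $400,000 > $150,000
print("Buy the second system" if worth_it else "Skip the second system")
```

If your downtime cost is low or flexible, the same arithmetic will come out the other way and tell you to skip the extra system.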

Another option, a compromise of sorts, is to look for a solution backed by a higher-value service contract with stronger SLAs. With a scale-out system, a single node failure won't take down the entire system, so the architecture itself offers some protection - but a cluster can still fail due to a network problem. Either way, a second system also gives you a backup for recovery and business continuity, so there is often real value in buying two.
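The value of buying two comes from multiplying failure probabilities. A back-of-the-envelope sketch (the availability figures are hypothetical, and the formula assumes independent failures - something a shared network or power domain can quietly break):

```python
def combined_availability(a1, a2):
    """Availability of two independent systems, either of which can carry the workload."""
    return 1 - (1 - a1) * (1 - a2)

HOURS_PER_YEAR = 8_760

single = 0.999                                   # one "three nines" system
paired = combined_availability(single, single)   # ~0.999999 ("six nines")
print((1 - single) * HOURS_PER_YEAR)             # ~8.76 hours of downtime/year
print((1 - paired) * HOURS_PER_YEAR)             # well under a minute/year
```

Note how quickly the gain evaporates if the two systems share a failure mode - which is exactly why the network problem mentioned above matters.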

6. Storage budget pain - storage is always too expensive.

As we all know, money is not infinite and storage is not free - even "free" software requires hardware and engineers to run it. Storage capacity comes at a cost, and that cost is always considered too high. Too often I find that people get stuck on dollars per unit of capacity and ignore other important metrics, such as end-to-end cost, dollars per IOPS, startup costs, or data center footprint (and the accumulation of technical debt). Below are some suggestions to help you manage and control storage costs.
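Two quotes that look far apart on dollars per capacity can invert once you also compute dollars per IOPS. A minimal sketch with entirely made-up quote numbers:

```python
def cost_metrics(price, usable_tb, iops):
    """Reduce a storage quote to two comparable unit costs."""
    return {"usd_per_tb": price / usable_tb, "usd_per_iops": price / iops}

# Hypothetical quotes: a disk system vs. a flash system.
disk  = cost_metrics(price=200_000, usable_tb=1_000, iops=20_000)
flash = cost_metrics(price=300_000, usable_tb=200,   iops=500_000)
# Disk wins on $/TB (200 vs. 1,500); flash wins on $/IOPS (10.00 vs. 0.60).
```

Which metric matters depends on the workload - which is the point of the next suggestion.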

Use the right storage technology for your workflow. Using flash when you don't need it is a waste of money. Using disk storage when you need flash just doesn't work. Consider hybrid storage systems that intelligently combine flash and disk, as they will help you on both the performance and budget axes.
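The budget appeal of a hybrid system can be expressed as a blended cost: hot data rides on flash, everything else on disk. The prices and hot-data fraction below are purely illustrative:

```python
def blended_cost_per_tb(hot_fraction, flash_per_tb, disk_per_tb):
    """Effective $/TB when only the hot fraction of the data needs flash."""
    return hot_fraction * flash_per_tb + (1 - hot_fraction) * disk_per_tb

# If only 10% of the data is hot, at illustrative prices of $1,500/TB (flash)
# and $200/TB (disk), the blend costs a small fraction of an all-flash build.
print(blended_cost_per_tb(0.10, flash_per_tb=1_500, disk_per_tb=200))  # 330.0
```

The smaller your genuinely hot working set, the more a hybrid system helps on both the performance and budget axes.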

With that in mind, use the cloud where appropriate - it's not a panacea. Moving a steady, always-on workload to the cloud is unlikely to save money. However, a very bursty or very occasional workload may benefit the bottom line if you can use a public cloud or hybrid cloud system.
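The steady-vs-bursty distinction comes down to this: on-prem capacity must be bought for the peak, while pay-as-you-go cloud tracks the average. All figures in this sketch are hypothetical:

```python
def annual_cloud_cost(avg_tb_stored, usd_per_tb_month):
    """Pay-as-you-go: you pay only for what is actually stored each month."""
    return avg_tb_stored * usd_per_tb_month * 12

def annual_onprem_cost(peak_tb, usd_per_tb_amortized):
    """On-prem: you buy for the peak, whether you use it or not."""
    return peak_tb * usd_per_tb_amortized

# A bursty workload peaking at 500 TB but averaging only 50 TB:
print(annual_cloud_cost(50, usd_per_tb_month=20))        # 12,000 -> cloud wins
print(annual_onprem_cost(500, usd_per_tb_amortized=60))  # 30,000
# A steady 500 TB workload flips it: 120,000 in the cloud vs. 30,000 on-prem.
```

Real quotes add egress fees, support contracts, and staffing, but the peak-versus-average shape of the workload is usually what decides the question.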

You'll want to engage integrators or VARs - they spend a lot of time talking to vendors and understanding the market, and can add value if you are evaluating storage systems or complete workflow solutions. Don't put up with VARs that don't add value!

You will inevitably approach a buying process with your previous storage experience in mind, but the market changes dramatically - what you knew about storage a year ago probably doesn't apply today. Do your due diligence if you are planning a new storage purchase or facility expansion.

Finally, look at the storage efficiency of modern file systems: software optimized to store more data in less rack space and use less power helps reduce costs while running a greener data center.