Nexenta Blog

How Does Software Defined Data Storage Equate to Savings?

02 Dec 2013  by Nexenta

Ask any CIO what their greatest concern is, and they’ll invariably raise some variation of worry over the budget. It’s a Catch-22 for many businesses when it comes to technology: there is always a faster, more reliable option, but it always comes at a cost. So how did the big guys get to the top? How have they learned to strike a balance between cost and effectiveness without compromising either entirely? When it comes to data, more and more have chosen to look at software defined data centers. Here’s why:

There’s no question that data is growing exponentially as we use technology for nearly every aspect of our lives – from shopping and paying bills to reconnecting with friends on social media. All of these things produce data – a lot of data, in fact. Studies have estimated that we create 2.5 quintillion bytes of data each day. All of this data needs to be stored, and storing it can be quite costly. Some organizations are spending as much as 40% of their IT budget on storage solutions.

Therefore, government agencies, credit card companies, health care facilities, social media sites, retailers and many other entities are constantly looking for ways to store this data efficiently and affordably. Software Defined Data Centers (SDDC) address all of these concerns and more because they take the virtues of virtualization and apply them to data storage. In fact, our customers have reported saving as much as 75% in storage costs.

Here are just a few of the ways businesses can save using software defined data storage:

Scalability: Did you know that 90% of the data out there has been created in the last two years? Just think about what this could mean in terms of data storage ten years from now. Under a traditional model, as data grows, more and more proprietary hardware must be purchased to keep up with demand. Software defined data storage solutions scale out on commodity hardware instead, so capacity can grow with demand at a much lower cost.

Operational Cost Savings: We talked about scalability, but what about the operational costs that traditional storage methods carry? Energy costs to cool the storage system, labor costs to monitor and troubleshoot it, and ongoing maintenance costs are just some of the expenses that a software defined storage system reduces or eliminates.

Open Source Opportunities: Hardware storage systems are closed systems. You are bound to specific vendors, which limits your flexibility and your ability to adapt when needed. Because software defined data storage solutions are often open source, you can take advantage of the latest technologies and adapt your storage solution at the lowest cost.

For more information about the benefits of software defined data storage solutions, contact us today.


VMware View Acceleration on Display

09 Oct 2013  by Nexenta

As VMworld Europe 2013 approaches, Nexenta continues to drive home the advancements in application-centric storage with Nexenta VSA for VMware Horizon View.  As the demo for the show is completed and desktop pools are created and tested, it is always exciting to see independent testing done and presented.  Just ahead of VMworld, VMware’s EUC team posted a blog detailing the results of their testing of Nexenta’s VSA for VMware Horizon View.  With a 72% increase in desktop density and a 38X reduction in physical SAN traffic, VMware found VSA for VMware Horizon View to be key to a successful VDI rollout.  These performance statistics are not just for show: the reduction in traffic and the increased density do not just help the balance sheet, they can help stalled deployments move forward.

“With VSA for Horizon View, Nexenta has introduced an amazing product that unlocks outstanding user experience at a low TCO and makes it possible to recover stalled deployments without requiring a disruptive and painful rip and replace scenario.”  -John Dodge, VMware

If you would like to see this technology in action, dig into the performance metrics, or learn more about the acceleration it provides for deployments, make sure to come by the Nexenta booth (Hall 8, S-300) at VMworld Europe.


Nexenta Systems-Powered Storage Solution Achieves 1.6 Million IOPS

30 Sep 2013  by Nexenta

Nexenta has achieved 1.6 million IOPS (Input/Output Operations per Second) with high availability and no single point of failure. Comparable solutions from proprietary vendors cost significantly more than the Nexenta and Area Data Systems solution and cannot guarantee high availability. With the combination of Nexenta’s Software-defined Storage, NexentaStor™, and high-performance, all-flash hardware, there is now a clear enterprise-class alternative to meet the scalability demands of big data.

“Our customers can now reach well over one million IOPS and capitalize on big data opportunities without breaking the bank on proprietary storage technologies that cost hundreds of thousands of dollars,” said Bridget Warwick, Chief Marketing Officer, Nexenta Systems. “This is further proof that Nexenta’s Software-defined Storage is changing the economics of the enterprise storage market.”

Nexenta is demonstrating the 1.6 million IOPS storage configuration at Intel® Solutions Summit 2013 from March 19-21, 2013 in Los Angeles, Calif. Nexenta is a Silver Sponsor and will be at its booth in the storage zone to discuss the enormous opportunity for Intel channel partners to drive ideal storage solutions, powered by Nexenta, to their customers. Architecture recipes using Nexenta and Intel products are listed on Intel’s website at: http://www.esaa-members.com/recipes/advSearchList/182.


A Few Impressions from Dell’s Banking Day in NYC

30 Sep 2013  by Nexenta

Three Somewhat Surprising Trends

Contributed by Evan Powell, Chief Strategy Officer, Nexenta Systems

Back in May, I was thrilled to discuss Software Defined Storage at Dell’s banking day in their offices at One Penn in NYC. I was one of two guest speakers; the other was Gartner’s Joe Unsworth, who did a great job outlining the transition to flash-based storage. After our fairly brief presentations and some Q&A, there was an open round table discussion. The attendees were a who’s who of global financial IT leaders, including CIOs and VPs of technology and storage from most of the “too big to fail” banks; we had a couple of already highly referenceable customers in the audience as well, which was great. A friend at Dell estimated that the collective IT capital purchases of the attendees were approximately $20-30bn per year. I cannot thank Dell enough for the opportunity and for the partnership.

As an aside – I think all of us in IT owe Dell a debt for their willingness to shift towards enterprise and towards a vision of enterprise IT that, for me, is more compelling, more open, and much more dynamic than that of many legacy system vendors from which Dell is rapidly taking market share. Maybe I should blog sometime soon about why we are Dell fans – I’d welcome the input of folks who read this blog. For now, suffice it to say that I think Dell is doing a good job leveraging their strengths, including supply chain management and global support, to both enable and benefit from the ongoing re-platforming of IT. Yes – I am biased, since Dell recently started paying their sales teams on NexentaStor – so take these comments with a grain of salt. On the other hand, we targeted Dell as a preferred tier one vendor because they are so well positioned, so our money and focus are where our mouth is.

The nature of the Banking Day conversations is that they are closed door and vendor neutral. I did not try to sell Nexenta’s products or even the Dell hardware and services we leverage to deliver software defined storage. Instead I tried to kick off a real conversation.

Here are a few observations. First, some comments and themes I expected; then, two or three really surprising ones.

As expected, these buyers are more interested in agility than they are in cost savings. And, with one or two exceptions, they assented freely to the notion that legacy storage is done, finished, a thing of the past; it feels like the transition to a software defined data center is just the straw breaking the legacy camel’s back.

Perhaps most surprising to me were a few items:

  1. Increased recognition of the inevitability of cloud-based approaches. I’ll call this acquiescence #1. Many financials have been fighting the easy on-ramp of AWS for years as they struggled to get their thousands of developers to keep their IP on premise and protected. There seems to be a sense that only by building a better, safer, more performant and massively easier to deploy and manage IT platform could they attract developers to stay within the enterprise. I sensed a lot less willingness to fight their own users than in the past and much more confidence in their ability to deliver a better solution that will retain users.
  2. Acquiescence #2 – BYOD is here to stay. Again, maybe I’m just out of touch; however, RIM and BlackBerry rose to prominence in part because of the mandates of buyers (and their colleagues in government). And now the iPad, Android devices and the like are a fact of life that Software Defined Storage and the rest of IT have got to accommodate.
  3. Nobody believes today’s all-flash landscape will be with us in 18 months. Here I may be stealing Joe and Gartner’s thunder slightly. Suffice it to say that he presented a fairly provocative view of likely changes, and everyone agreed that today’s apparent leaders are unlikely to win longer term. Hybrid players like Nexenta-based solutions and Nimble did receive more support.

I’d be remiss if I didn’t point out one final acquiescence, which may be why the event was so well attended – I think there is more uncertainty over the fundamental structure of IT than I’ve seen since I first started partnering with and selling to these buyers 10-15 years ago. The storage teams feel like they are under threat – and they are. In a way it is similar to what I experienced when building Clarus Systems (now Riverbed), when the voice teams were realizing that voice and video convergence with IP networks could mean “career convergence” as well. As the software defined data center progresses, you’ll see much more need for a true DevOps mindset and skill set. Service engineering is now the hot commodity, and folks who know a particular silo really well are increasingly being flanked by those who build IT platforms that deliver on the agility promised by software defined data centers.

Hopefully these few nuggets are of interest. All in all, it is tremendously exciting to see some of the most credible and financially powerful IT buyers and partners (again – thank you Dell!) assent to the notion that software defined storage has got to happen for IT to remain relevant and to deliver on the promise of a more agile platform. I learned a lot from the conversations.


Congratulations to EMC!

30 Sep 2013  by Nexenta

Congratulations to EMC and their software teams for announcing ViPR. Since we have been selling software defined storage for a number of years – and now have many times more customers than VMware did when EMC bought them (and more than 10x as many as 3PAR had when they went public, for example) – I take exception to the lead in the press release proclaiming ViPR as “the world’s first Software Defined Storage platform…”

Nonetheless, ViPR appears to be a real step forward towards software defined storage. And EMC deserves a lot of credit for again showing a willingness to risk aspects of their core business in order to keep up with customer requirements.

If you are one of the folks who read this blog regularly, you know we have shared a simple definition of SDS. You can read more about it here. Our definition is based on countless discussions with our cloud and enterprise customers, who have shared with us why they started down the journey to software defined storage in the first place.

Basically it is:

  1. Abstract away the underlying hardware.
  2. Achieve flexibility through the ability to handle multiple data access methods and data types.
  3. Be truly software defined – through an architecture and set of APIs that allow, for example, orchestration software to manage the storage and to determine to what extent it is meeting application requirements.
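
To make those three criteria concrete, here is a minimal sketch in Python of what a controller meeting them might look like. This is purely illustrative – every class and method name below is hypothetical, and it is not NexentaStor’s or ViPR’s actual API.

    # Illustrative sketch of the three SDS criteria; all names are
    # hypothetical and no real NexentaStor or ViPR API is implied.
    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """Criterion 1: abstract away the underlying hardware. A JBOD,
        an SSD pool, or a legacy array all sit behind one interface,
        so management capabilities stay consistent across devices."""

        @abstractmethod
        def read(self, key: str) -> bytes: ...

        @abstractmethod
        def write(self, key: str, data: bytes) -> None: ...

    class SDSController:
        """Criteria 2 and 3: several access methods on one controller,
        plus an API an orchestrator can call to check requirements."""

        def __init__(self, backend: StorageBackend):
            self.backend = backend
            self.ops = 0

        # Criterion 2: block and object access over the same pool.
        def put_block(self, lba: int, data: bytes) -> None:
            self.ops += 1
            self.backend.write(f"block/{lba}", data)

        def put_object(self, bucket: str, name: str, data: bytes) -> None:
            self.ops += 1
            self.backend.write(f"object/{bucket}/{name}", data)

        # Criterion 3: expose state so orchestration software can
        # verify that application requirements (a crude operations
        # budget here) are actually being met.
        def meets_requirements(self, max_ops: int) -> bool:
            return self.ops <= max_ops

In a real system the requirements check would cover latency and IOPS against per-application SLAs; the point is simply that the control surface is an API, not a box.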

If you look at what we know about ViPR, I think it is policy-driven software that delivers object storage and that also manages, and possibly virtualizes, block and file storage. I gathered this especially from the more detailed write-up over on EFYTimes.

It’s difficult to glean much from a press storm and I know that things will be much clearer once we see more detail from EMC and customers but let’s look at early indications of how ViPR might shape up based on those criteria.

  1. Abstraction
      • ViPR: ViPR does not, it appears, add a consistent set of storage management capabilities over any hardware – it exposes and manages those that are already available on the hardware. If you are on an array with snapshots – congratulations, you’ve got (some sort of) snapshots. On the other hand, if you are on a JBOD, no luck. Additionally, of course, ViPR does not open up the on-disk format, as it is generally not in the data path. This means vendor lock-in remains and arguably increases as ViPR hooks into your VMware environment.
      • NexentaStor: Conversely, NexentaStor runs on any hardware, including high performance SSDs to deliver caching, and of course JBODs, and does deliver that consistent set of capabilities irrespective of the underlying hardware. But NexentaStor really prefers JBODs to legacy storage arrays, and it is extremely likely that ViPR will be better able to manage heterogeneous storage arrays, especially those from EMC, than NexentaStor does; NexentaStor can virtualize them but is not aware of their underlying capabilities in the way that ViPR will be.
  2. Achieve flexibility. The basic difference is that NexentaStor is broader and more flexible than we think ViPR will be when it ships, thanks, again, to controlling everything from the on-disk format to the access methods. On the other hand, while Nexenta has sponsored open source object approaches, we are not shipping an object storage solution today, whereas ViPR will include object. Whether we will ship object by the time ViPR ships remains to be seen.
      • A lot depends on to what extent ViPR can actually virtualize the underlying resources by combining them into pools that include SSDs; NexentaStor has this ability today, which is why we have partners shipping JBODs with cache achieving 1 million IOPS and more. On the other hand, the promised capability of ViPR to turn object into file and vice versa could be important.
      • I am hopeful that in this area ViPR will be a massive step forward vs. legacy arrays, which are essentially black holes for your data, each requiring a different set of expertise to manage and each built to address a different silo of data.
      • What needs to be seen is how ViPR will handle putting the right data on the right underlying array (see the placement sketch after this list). Whereas with NexentaStor the configurations themselves, such as the block sizes used to write the data to disk, are variable, in the case of ViPR the software has to make sure that, for example, video files needed for streaming are stored on underlying Isilon arrays, whereas structured data like Oracle remains on VNX and, presumably, high random I/O workloads from larger cloud and VMware deployments are served from XtremIO.
  3. Be truly software defined. This is arguably the most vague section of our fairly vague definition of software defined storage. Today, however, IF ViPR is routing data sets based on application requirements to the right underlying array – per the point above – then it may well have the architecture necessary to close the application management loop. By comparison, NexentaStor can absolutely eliminate the need for deep storage engineering with solutions like VSA for VDI. In this solution the customer must simply enter the number and type of desktops, and NexentaStor – with integration code for VDI – does the rest AND, crucially, tests and manages the system to ensure that the requirements are being met.
      • Nexenta, however, built the VSA for VDI business logic in part in hopes of seeing others in the industry run with the task. Arguably, orchestration solutions like aspects of OpenStack and CloudStack, and even VMTurbo, should pick up the baton if they are truly going to be the brain inside the software defined data center. It may be that EMC with ViPR and, of course, VMware will lead the industry in creating an open approach to characterizing application requirements and using them to simplify management.
      • Please note – plaintive request – what the storage layer really needs is something like the recently announced OpenDaylight Project from IBM, Cisco, Juniper and of course the Linux Foundation. I think even Nicira / VMware / EMC is joining that effort to open up the control layer. Read more about the OpenDaylight Project here.
      • In the meantime, Nexenta’s upcoming Metis utility – which ties application logic to details like pool configurations – is growing in value and importance, with integration into our and our partners’ Salesforce, for example, and into ServiceNow and other management solutions in the future. However, again, Nexenta cannot be the business logic of a software defined data center on our own. The industry needs to come together here, and maybe ViPR will be a catalyst to make that happen.
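
As a footnote to the placement question in point 2, here is one way the routing decision could be pictured. The sketch below is a hypothetical Python illustration: the pool names echo the examples above (Isilon for streaming files, VNX for structured data, XtremIO for high random I/O), but the logic is invented for this post and is not taken from ViPR or NexentaStor.

    # Hypothetical sketch of policy-driven placement: route each
    # workload profile to a suitable backend pool. The mapping mirrors
    # the examples in the text; none of this is real ViPR code.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        io_pattern: str      # "sequential", "structured", or "random"
        record_size_kb: int  # preferred block size for the data set

    def place(workload: Workload) -> str:
        """Pick a backend pool based on the workload's I/O profile."""
        if workload.io_pattern == "sequential":
            return "isilon-pool"   # large streaming files, e.g. video
        if workload.io_pattern == "structured":
            return "vnx-pool"      # databases such as Oracle
        return "xtremio-pool"      # high random I/O: VDI, cloud

    if __name__ == "__main__":
        print(place(Workload("video-streaming", "sequential", 1024)))  # isilon-pool
        print(place(Workload("oracle-oltp", "structured", 8)))         # vnx-pool

The interesting question this raises is who owns such a function: the storage software itself (as with NexentaStor’s variable pool configurations), the virtualization layer (as ViPR appears to propose), or orchestration software sitting above both.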

