Wednesday, November 2, 2016

Why can't they get it straight

So I really don't blog much about work, as I am always busy presenting to customers or partners about servers, storage or virtualization. But I saw a post about HPE Hyper Converged products and wanted to respond, as it contained several inaccuracies.

First, the views and comments that I make here are my own.  They are not those of my employer. 

Second, any mention of Nutanix or their documentation comes from public sources and is the property of Nutanix. Any references to other companies' products are the property of those companies. References that appear in my posts are credited to their source(s).

Full disclosure: I work for HPE on the Hyper Converged team and often discuss the differentiators between products when clients want to learn about ours. I am a 27-year veteran of IT with a strong background in servers, storage and virtualization.

I recently read a blog post by Steve Kaplan, the Nutanix CTO, comparing Nutanix's products with Hewlett Packard Enterprise's Hyper Converged products.
Let me first say that I respect the author of the blog, but his inaccuracies sway the reader into thinking that Nutanix's products are the best in the marketplace. They are not.

I am not writing this post to bash a competitor. I am simply responding to Steve's post to correct its oversights, and to point out the differences between the Nutanix and HPE products in this space.

In Steve’s post, he states:

For an organization of HPE’s size to lavish all of this attention on the vastly smaller Nutanix is quite remarkable. Besides its $50B in annual sales, HPE has 240,000 employees and 145,000 distributors, resellers and alliance partners around the world. Yet it clearly sees Nutanix as not only a threat, but as the industry standard against which it measures itself.

Steve… you hit the nail on the head. HPE is a large company, but we focus on many products and technologies. Speaking as someone who has worked for several large channel partners and manufacturers: as an industry-standard practice, battle cards and competitive information are gathered to help employees and partners understand the products and the differences between vendors. So when you say we're focused on Nutanix… well, we're focused on the market space and on providing best-in-class information and solutions to that space.

By the way… Blockbuster went the direction it did because of its short-sighted management. I don't think HPE is lacking vision or creativity, as was the case with Blockbuster. Our CEO has a clear and defined vision for HPE's future. I believe it will allow HPE to bring hybrid solutions to the intelligent edge for a revolution in IoT, and to bring world class services and real world consulting to our clients as they look to the shift to hybrid IT. You can learn more about our direction by watching the Securities Analyst Meeting at SAM 2016.

HPE vs Nutanix

Our clients can use our publicly available Total Cost of Ownership calculator, which gives a fair analysis of TCO for our converged and hyper converged solutions. It can be found here.

As far as cost is concerned, regarding Steve's statement below:

More importantly, costs are only relevant if the products are at least somewhat equivalent – a premise I hope to quickly dispel.

He never really addresses the issue. He creates a table of semi-factual bullet points with columns for Nutanix and HPE. The table clearly shows nothing about costs. But since he brought it up, let's look into that.

HPE markets five (5) 9's of availability for the HPE hyper converged products, using Network RAID 10 in the StoreVirtual VSA (HPE's software defined storage, or SDS, data fabric). To achieve this with a Nutanix solution, the redundancy factor must be 3 (RF3). That requires at least 6 nodes in a Nutanix solution, which may have less usable storage than an HPE solution with the same number of nodes. Further, because the HPE StoreVirtual SDS solution is protected against disk failures, a failed drive does not take down a node or reduce the performance of the cluster or node. The same cannot be said of the Nutanix solutions.

HPE has a long, proven sales record: the best-selling 2U, 2-socket server for 19 consecutive years.


Availability and redundancy are important to clients who run mission critical applications in all verticals. They are equally important to the SMB customer who needs to ensure that their infrastructure is protected from a failure. This is why our industry has developed technologies like hot-swappable power supplies, disk drives and blade servers: so that the chance of a failure is limited, and when something does break, it doesn't take the business down. It fails over to the surviving device(s) in the topology of the solution.

There is nothing "legacy" about the StoreVirtual VSA or how HPE uses it in the hyper converged product. In fact, HPE was doing hyper converged before it was a product category. The HPE BladeSystem c7000 is hyper converged: we provide compute, networking and storage in a single enclosure. In my opinion, it was really the first hyper converged solution on the market (circa 2005/2006).

What was missing from the BladeSystem solution? A software defined storage fabric and a simple interface that could provision virtual machines without the complexity of vCenter. The VSA and the HPE UX (User Experience) are the foundations of HPE's hyper converged solutions today.

The comment that HPE uses software obtained through an acquisition in 2006 that has the same underlying issues as traditional storage arrays is… well… crazy. This solution was used (and in some cases still is) by many of the hypervisor vendors right up to the release of their own products. StoreVirtual VSA was the first purpose-built, IP-based SAN clustering solution for hypervisors available in the market. And the idea that it's not VM-centric? The product is a VM itself and is supported on ESXi, Hyper-V, KVM and bare metal hosts, which gives customers flexibility and choice, including the hardware platform.

Yes, HPE has a long line card of products. It's called a portfolio: a full line of products (servers, storage, networking, software) that HPE has to meet the needs of the idea economy for the clients we serve, and to provide clients new solutions for the sea of IoT and the data it will bear. It should be noted that outside of the Nutanix circle, IT professionals who run large data centers clearly understand that hyper converged solutions, from Nutanix or anyone else, do not fit all enterprise-scale requirements, despite what Nutanix likes to indicate.

So I have taken Steve’s chart and put it into the same format as he did, but with an extra column.  The column labeled HPE Response has been added to provide the “real facts” about HPE’s hyperconverged products.  Take a look for yourself and give me some feedback. 

Remember that these are NOT official HPE responses, they are my own thoughts to Steve's post. 

Below is Steve's chart, reflowed for readability. Each section shows the Nutanix understanding of HPE Hyper Converged as published on Steve Kaplan's blog, followed by my HPE response.
Nutanix (per Steve's chart):

• True differentiated models (e.g. high density, compute heavy, storage heavy, storage only, GPU support, etc.)
• Mix & match models in the same cluster
• Customer choice of hardware platform (Super Micro, Dell, Lenovo and Cisco UCS)

HPE (per Steve's chart):

• 2 platform options
• Cannot mix & match in the same cluster
• Nodes in cluster must be identical
• No storage-only nodes
• Only available on HP hardware

HPE response:
Yes. Two world class hardware platforms.
The #1 selling server worldwide for 19 consecutive years shows that HARDWARE DOES MATTER to customers.

Customers have choices of processors, memory and NICs. They can also mix and match processors in the same cluster.

Customers can choose to add storage to any hypervisor cluster at will, including using the StoreVirtual or StoreEasy products.

The approach is:  keep it simple by building homogeneous clusters thereby eliminating performance issues caused by mixing different CPU, memory and storage resources in a cluster.

NTNX has 3 pages of restrictions on the “ability to mix and match models in the same cluster”.  NTNX HW Admin Guide, AOS 4.7, pg. 9.

Nutanix (per Steve's chart):

• Security-first design
• Full-stack security development lifecycle
• Many security certifications

HPE (per Steve's chart):

• (This cell intentionally left blank)

HPE response:
The HC line of products can be integrated with LDAP or AD, providing customers with choice in securing the solution.

As part of the hyper converged solution set, customers can use our StoreEasy products for integration with Microsoft AD Rights Management Service (RMS), BranchCache, DFS Namespaces, and SMB 3.x support.

In addition, StoreEasy delivers file-level encryption, deduplication, and compression, which are part of Windows NTFS. And we can supply hardware-based encryption using our Smart Array technology.

Nutanix (per Steve's chart):

• Web-scale infrastructure
• Scalable distributed system
• All intelligence in the software
• VM-centric approach
• Self-healing system
• API-driven automation and rich analytics

HPE (per Steve's chart):

• StoreVirtual (LeftHand)
• Shared namespace
• Still uses traditional storage constructs: LUNs, Volumes, RAID Groups, etc.
• Snapshots and clones at volume level
• Replication is at volume level
• Nothing in the system is VM-centric
• Maintains some of the issues of traditional storage arrays

HPE response:

The HPE hyper converged products offer the highest availability starting at 2 nodes, and include a fully licensed StoreVirtual VSA for snapshots and replication.

We invented the VSA, and it's the most mature product in the SDS space. It gives customers the flexibility to use vSphere or Hyper-V as their hypervisor of choice.

The solution is built on top of the hypervisor, so it provides the portability that customers want, without vendor lock-in.


  • StoreVirtual = enterprise grade data management.
  • NTNX: 3,000+ deployments
  • StoreVirtual: over 200,000 deployed

HPE's VM Explorer is a great tool for backups: a low-cost, easy-to-use and reliable VM backup solution. Within minutes you can start centrally managing your backups to disk, tape and cloud through an intuitive, easy-to-navigate web interface, with no product training required. Advanced server backup capabilities include incremental backups and replication, snapshot integration and native support for leading cloud platforms. With instant VM recovery, direct file-level restore from the cloud, encryption, and verification, VM Explorer delivers resiliency, efficiency and agility in your virtual environments.

HPE also supports 3rd-party products like Veeam, an industry-standard VM backup and recovery tool with cloud management platform integration.

Performance - Data Locality

Nutanix (per Steve's chart):

• Data locality: maintains data local to the VM/application
• When a VM/app moves, new data is written locally and old data is re-localized only on read
• Network-efficient design

HPE (per Steve's chart):

• No data locality; data is simply distributed across the cluster
• Not a network-friendly design

HPE response:
Yes. Data is in the clustered storage array. That’s how we protect it.  But let’s be specific since our competitors don’t understand how it works.

VM reads and writes are fully distributed across all nodes in the cluster.

Since we don’t move the vDisk during a vMotion or live migration, the data is always where the VM lives.

The VSA solution is an iSCSI based solution. To say it's not network friendly… well, that's a little silly.

First, we should note that the NTNX EULA prohibits any performance data from being published or discussed. So how can they state they are performant? What is their basis? What are they comparing it to?

Second, a single disk failure results in data being copied across the network to re-protect it to RF2 or RF3. This can have the side effect of impacting I/O to and from the VMs themselves.

Not an efficient use of network resources at all. 

Let’s be clear on jargon:

NTNX: "re-localized" = COPYING data over the network.

That network pipe is going to be awfully busy, because the data has to be recovered over the network.

In HPE’s Hyper Converged solution, should a drive fail, the drives are hot swappable; the storage is protected via the SMART Array controller and data is recovered within the node; rebuilt from the parity of the drives in the RAID set; NOT COPIED FROM OTHER NODES OVER THE NETWORK.
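To put rough numbers on that difference, here is a back-of-the-envelope sketch. The throughput figures are my own illustrative assumptions, not measured numbers from HPE or Nutanix; the point is only that a distributed re-protection consumes shared cluster bandwidth, while an in-node parity rebuild does not.

```python
# Rough time to re-protect the data on a failed drive.
# All throughput numbers below are illustrative assumptions,
# not vendor-measured figures.

def rebuild_hours(data_tb, throughput_mb_s):
    """Hours to move data_tb terabytes at throughput_mb_s megabytes/second."""
    seconds = (data_tb * 1_000_000) / throughput_mb_s
    return seconds / 3600

# 4 TB of data on the failed drive (assumed)
in_node = rebuild_hours(4, 200)       # in-node RAID rebuild at ~200 MB/s (assumed)
over_network = rebuild_hours(4, 125)  # re-copy over a ~1 GbE path (~125 MB/s)

print(f"in-node rebuild: ~{in_node:.1f} h")
print(f"network re-protect: ~{over_network:.1f} h")
```

On a faster network the copy itself may finish sooner, but it still competes with VM traffic for the same links, which is the point being made above.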

Resiliency and Data Protection

Nutanix (per Steve's chart):

• Nutanix uses distributed data protection defined by replication factor (RF2/RF3). The system is designed to be self-healing and maintains resiliency and performance through failure events.
• On an SSD or HDD failure, the drive is taken offline and parity is rebuilt throughout the cluster
• The larger the cluster, the faster the recovery; typically minutes rather than hours/days
• No need to replace the drive until capacity is needed = no fire drill
• SSD/HDD replacement adds capacity back to the cluster

HPE (per Steve's chart):

• Combination of local hardware RAID and Network RAID (RAID is an archaic technology invented in 1987, designed when HDDs had very small capacities)
• Rebuilding 4-6 TB can take 20+ hours on a loaded system
• Performance during rebuild is significantly impacted
• RAID + RAIN has significant overhead
• Larger cluster size doesn't increase rebuild performance

HPE response:
Well, when you have the #1 selling server for 19 consecutive years, with Smart Array technology embedded, it proves that HARDWARE DOES MATTER, and customers know this!

Sure, RAID technology has been around for years. It's the reason we have hot-plug, replaceable disk drives. In fact, HPE owns the patent for RAID 1, which was developed by DEC and patented in 1983, so we know a little about the subject. RAID 1 is what's used in the HC380 and HC250, along with RAID 5.

We believe that redundancy technologies should be applied to any solution that runs critical workloads. Why take a performance hit or an outage because a disk failed, when you can protect against that failure?

Would you buy a server with one power supply? NO. That is why we use RAID.

Remember that in the early years of RAID, some people referred to it as a "Redundant Array of Inexpensive Disks", not "Independent Disks".
In IT we all know that disks fail. We have developed the technology to give us pre-failure notification of an impending failure to avoid down time or outages. This is why SMART (Self-Monitoring, Analysis and Reporting Technology) is part of every drive we deploy.

StoreVirtual is built on data protection technology that provides 99.999% availability; the system remains online even after multiple drive and/or node failures.

Nutanix requires you to make a choice between data availability and storage efficiency. With RF2, only one drive or node can fail in the entire cluster. RF3 can sustain 2 failures, but leaves only 33% of the raw storage purchased as usable capacity.
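The capacity side of that tradeoff is simple arithmetic. A minimal sketch, ignoring the metadata, CVM and rebuild-reserve overheads that real systems also subtract:

```python
# Usable fraction of raw capacity under full-replica protection schemes.
# Sketch only: metadata, CVM and reserve overheads are ignored.

def usable_fraction(copies):
    """With `copies` full replicas of every block, 1/copies of raw is usable."""
    return 1 / copies

rf2 = usable_fraction(2)             # Nutanix RF2: 50% of raw
rf3 = usable_fraction(3)             # Nutanix RF3: ~33% of raw
network_raid10 = usable_fraction(2)  # Network RAID 10: mirrored, 50% of raw

print(f"RF2: {rf2:.0%}  RF3: {rf3:.0%}  Network RAID 10: {network_raid10:.0%}")
```

Note that layering hardware RAID under Network RAID 10, as the HPE appliances do, reduces the usable fraction further; that is the price of surviving a drive failure without any network traffic.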

Rebuild in minutes? Well, only if there is very little data in your cluster and on the drive being rebuilt, and no workloads pounding the cluster!

The problem with counting on a larger cluster to speed rebuilds is that even more data is being copied inefficiently over the network because of a drive failure, and every node's I/O is negatively impacted by the failure of a SINGLE drive.

HPE isolates the impact to the node by utilizing the Smart Array controller of that node for the rebuild. Keep it simple.

Data Efficiency

Nutanix (per Steve's chart):

• Distributed global deduplication
• In-line and post-process compression
• Erasure Coding across nodes

HPE (per Steve's chart):

• No deduplication
• No compression
• No erasure coding
• CS250 has 28.8 TB raw; effective capacity ~11.7 TB (4-node appliance)

HPE response:

In the Nutanix Bible (a publicly available document), you can find the truth about their product.

They claim that they can provide better storage utilization by using Erasure Coding, but at a cost to the customer in hardware, software, licensing and support.

On page 78 of their bible, in the Pro Tips section, it states:

It is always recommended to have a cluster size which has at least 1 more node than the combined strip size (data + parity) to allow for rebuilding of the strips in the event of a node failure. This eliminates any computation overhead on reads once the strips have been rebuilt (automated via Curator). For example, a 4/1 strip should have at least 6 nodes in the cluster. The previous table follows this best practice.

6 Nodes to have Erasure Coding… to provide better storage utilization and protection?
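The quoted Pro Tip reduces to simple arithmetic. This sketch only restates the rule from the Nutanix passage above:

```python
# Minimum cluster size for an erasure-coded strip, per the quoted rule:
# at least data + parity + 1 nodes, so a strip can be rebuilt after a node failure.

def ec_min_nodes(data, parity):
    return data + parity + 1

def ec_capacity_overhead(data, parity):
    """Extra raw capacity consumed by parity, as a fraction of stored data."""
    return parity / data

print(ec_min_nodes(4, 1))          # a 4/1 strip needs at least 6 nodes
print(ec_capacity_overhead(4, 1))  # 25% overhead, vs 100% for RF2
```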

The HPE HC solutions can start at 2 nodes and provide five (5) 9's of availability. That's 4 fewer nodes of hardware, software, licensing and support.

While the StoreVirtual VSA has dedupe and compression in the current shipping product, it is my understanding that they will be added to the HC products sometime in 2017.

What Nutanix doesn't tell you is that you need these "storage efficiency" features because you must reserve free space in the cluster. This is necessary to prevent the cluster from going down when there are multiple failures, since there is no RAID protection of the data. And they require a more expensive license for those features, in order to ensure you can protect your cluster from going down.

And why is this any better?

NX3460-G5 (2x800GB SSD, 4x1TB HDD): 22.4 TB raw; using RF3 = 5.4 TiB effective!
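For readers who want to check that figure, the arithmetic can be sketched as below. The gap between the straight RF3 quotient (about 6.8 TiB) and the quoted 5.4 TiB effective would come from CVM, metadata and reserve overheads, which I am not attempting to model here:

```python
# Raw-to-effective capacity check for a 4-node NX3460-G5 under RF3.
# Drive sizes are decimal TB; effective capacity is usually quoted in binary TiB.

TB = 1000 ** 4   # bytes in a decimal terabyte
TIB = 1024 ** 4  # bytes in a binary tebibyte

per_node_raw = 2 * 0.8 * TB + 4 * 1 * TB  # 2x800 GB SSD + 4x1 TB HDD per node
raw = 4 * per_node_raw                    # 4 nodes = 22.4 TB raw
after_rf3 = raw / 3                       # one usable copy out of three

print(f"raw: {raw / TB:.1f} TB")                # 22.4 TB
print(f"after RF3: {after_rf3 / TIB:.2f} TiB")  # ~6.79 TiB, before overheads
```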

Intelligent Data Tiering

Nutanix (per Steve's chart):

• Flash contributes to the capacity of the datastore

HPE (per Steve's chart):

• Not optimized for flash
• StoreVirtual can't tell the difference between an SSD and an HDD
• Manually configure tier 0, manually configure tier 1, enable Adaptive Optimization

HPE response:
NTNX is just another “me too”.

Nutanix is just flat wrong!

Adaptive Optimization provides automatic, intelligent tiering across the SAS and SSD tiers. No manual intervention is required. In addition to hybrid arrays, we also have all-spinning and all-flash arrays available.

Flash can be the entire datastore, or we can use a hybrid of SSD and HDD to provide performance and choice.

The storage is configured at the hardware RAID level for redundancy, and the VSA uses Adaptive Optimization to provide automatic data tiering, keeping frequently used data on SSDs.

This is configurable on a per-volume basis to give customers greater control and choice over which data may use this feature.

BTW, NTNX likes to use old documentation from competitors to claim that things are "manual".


Nutanix (per Steve's chart):

• No Nutanix cluster node limits

HPE (per Steve's chart):

• Max 16 nodes for HC380
• Max 32 nodes for CS250

HPE response:
HPE HC products can support clusters with more than 16 nodes in the HC380 product line. 

HPE believes data protection is paramount. A solution built on 2nd-tier hardware (like Nutanix) has no experience operating mission critical workloads for Fortune 100 companies, so of course they would say the scalability is unlimited.

Nutanix recommends that RF3 be used in any cluster greater than 16 nodes. Their customers are free to build out clusters with exceptionally inefficient storage.

How available is that 100-node cluster when 2 SATA HDDs fail, after they told you RF2 would protect your data?

The cluster goes down and ALL of your VMs will be offline.

Multi-Hypervisor Support

Nutanix (per Steve's chart):

• ESXi, Hyper-V, AHV

HPE (per Steve's chart):

• ESXi for HC380
• ESXi and Hyper-V for CS250

HPE response:
NTNX has built a proprietary hypervisor based on KVM in order to lock in customers. HPE supports the latest industry standard hypervisors to meet the open needs and broadest requirements of our customers.

Datacenter Management

Nutanix (per Steve's chart):

• Unlimited number of nodes in a cluster, and an unlimited number of clusters, can all be managed from the same Prism interface
• 1-click upgrades: OS, firmware, hypervisor
• 1-click infrastructure management: storage, cluster, VM and network management
• 1-click remediation: storage, cluster, VM and network management
• 1-click operational insights: storage, cluster, VM and network management

HPE (per Steve's chart):

• Up to 32 server nodes can be clustered as a system and managed from the same user interface
• New UI for HC380 is simple, but it is also very basic
• Limited upgrade simplicity: multiple clicks for OS/firmware, hypervisor still manual, manual downloads of upgrade files

HPE response:
The HC UI is a way to provision VMs without requiring knowledge or experience of vCenter, and without giving an inexperienced administrator access to vCenter, where they might make a mistake.

Upgrades are as easy as running OneView InstantOn and adding the new node to the existing cluster.

BTW: their claim of having "one-click updates"… RIDICULOUS. They publicly state that 1-click is "metaphorical" and means "easy". Their upgrades, and many other tasks they call "1-click", actually take multiple clicks.

HPE's OS and firmware updates are done through the HC UI and are simple to complete.

HPE HC products offer a variety of support levels, including having HPE support services manage the entire solution remotely and provide call home features to mitigate risk of workload downtime.

The UI is intuitive enough for a novice to use, and a demo is publicly available.

With HPE Cloud Optimizer, administrators can get real time analytics from their environments; physical, virtual and cloud based. 

Application & Data Mobility

Nutanix (per Steve's chart):

• 1-click hypervisor conversion
• Cross-hypervisor DR
• Backup to public cloud

HPE response:
While we applaud customers who have chosen to use our ProLiant servers, we know that things change in the data center. That is why we built the HC380 on top of x86 and software defined storage platforms. Because the VSA is hardware agnostic and can run on vSphere, Hyper-V or KVM, customers don't have vendor lock-in for hardware or hypervisor. So there is NO NEED for a hypervisor conversion tool.

We also have Cloud Optimizer, VM Explorer, CloudSystem 10, Azure, and Eucalyptus to provide cloud management, workload shifting and backups to a cloud provider.

Enterprise/Business Critical Application Support

Nutanix (per Steve's chart):

• Any app at any scale
• SAP certification
• Exchange at scale RA
• SQL at scale RA
• Oracle, Splunk, Epic, etc.

HPE (per Steve's chart):

• HPE does not reflect any HC certifications or RAs for enterprise apps; all of these workloads are based on legacy storage.

HPE response:
Not really sure why Nutanix believes that HPE HC solutions have no reference architectures for enterprise apps.

HPE has plenty of reference architectures and configurations for enterprise applications.

Our DL380 is also certified to run SAP applications and to host applications like EPIC, Oracle and SQL.

BTW- NTNX is being deceitful with SAP, because they are certified ONLY for SAP NetWeaver, not any other SAP application.

Nutanix (per Steve's chart):

• Over 100 eco-system partners
• Over 50 eco-system partners supporting AHV

HPE (per Steve's chart):

• None specific to HPE hyper convergence

HPE response:
Sure, we have eco-system partners; we have been in business a long time, and we have strategic alliances with several vendors. HPE has long-standing alliances and partnerships with Microsoft, VMware, Citrix, Veeam, Docker and several others in the hyper converged space.

Our distribution and channel programs allow our customers to take advantage of HPE's global reach, including consulting and professional services delivered by HPE employees and certified partners.

Our competitors like to say that they have a global presence, but in fact they hire 3rd parties to deliver their implementation services outside of the US.

You can see how everything Nutanix does works in their publicly posted documentation.
Want to see what HPE is up to? Just visit the HPE website, or contact one of our 1,000+ channel partners, who can provide you with business solutions and services tailored to meet your business challenges.
