I've been involved with VMware since 2005. Every year the company presents and delivers a new technology vision, and customers continue to share case studies. At its events, I now meet fewer of the people I've known since the early days and more new faces.
This was especially true at VMware Cloud Native Day, where young, enthusiastic people were engaged in demos and talks. It reminded me of the enthusiasm when the VMware Infrastructure OEM was first launched.
When the VMware OEM was first launched, our customers had never been exposed to virtualization. To accurately convey what would and wouldn't change from their current systems, and what to watch out for, a technical manager from VMware thoroughly explained the operating principles behind the features rather than the features themselves, which left him fielding a barrage of questions.
Having been at this for quite a while now, I took up educating the younger generation as my mission as a vExpert, and I asked an old friend of mine, a principal engineer at VMware, to give a lecture on vSphere performance at my Gikon-juku (a private school for learning the soul of the engineer). As a vendor that delivers virtualization to customers, our job is to understand the characteristics and implementation details of functions that are not explicitly described in the manual, to bring a system perspective that only Fujitsu can offer, to nip potential problems in the bud, and to help design appropriate functions and configurations. These things give us pride as engineers.
It was with the idea of passing this spirit on to others that I named it Gikon-juku.
Cloud Native-Oriented Products Begin to Move
This time, ahead of vFORUM, VMware held VMware Cloud Native Day, its first event dedicated to cloud native.
The focus was on container platforms and the cloud services that manage them. What caught my attention was the Project Pacific – Technical Dive; Project Pacific had left an impression on me at VMworld 2019. At the start of his presentation, the speaker, Mr. Ohisa, went out of his way to say that it wouldn't be a deep dive, but I'd say it was deep enough for people who operate existing IT. For the basic operation of Kubernetes (K8s), which Project Pacific will implement, he started from the hurdles facing operators who work with nothing but vCenter: operations outside vCenter, such as the mechanism for executing items defined in YAML files and building environments with dedicated kubectl commands. He told us not only that NSX-T is required for the network and that the K8s master requires a three-VM configuration, but also that Kubernetes implemented on ESXi is 8% faster than bare-metal Linux thanks to the overwhelming strength of ESXi's NUMA (Note 1) CPU scheduler, along with plenty of other information, all of it currently available. As someone with a healthy curiosity, I really liked this presentation style.
- (Note 1) NUMA (Non-Uniform Memory Access):
One form of architecture for a shared memory multiprocessor computer system in which the cost of accessing the main memory shared by multiple processors is dependent on the memory area and the processor, and is not uniform.
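The declarative mechanism described in the talk (desired state written in a file, then applied with kubectl) can be sketched as follows. This is a minimal, generic example of my own, not material from the session; kubectl normally reads YAML but also accepts the equivalent JSON, so Python's standard library suffices here. All the names (nginx-demo and so on) are illustrative.

```python
import json

# A minimal Kubernetes Deployment manifest expressed as a Python dict.
# The structure mirrors what an operator would write in YAML.
manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "nginx-demo"},
    "spec": {
        "replicas": 3,  # desired state: K8s keeps three pods running
        "selector": {"matchLabels": {"app": "nginx-demo"}},
        "template": {
            "metadata": {"labels": {"app": "nginx-demo"}},
            "spec": {
                "containers": [
                    {
                        "name": "nginx",
                        "image": "nginx:1.17",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# Writing the file is the whole declarative workflow on the operator side:
# one would then run `kubectl apply -f nginx-demo.json`, and the control
# plane converges the cluster toward this desired state.
with open("nginx-demo.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

The point for existing-IT operators is that nothing here goes through vCenter: the file plus a kubectl command is the interface.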
What caught my eye in the exhibition area was a robot built from blocks, demonstrated solving a Rubik's Cube. A cloud-native app was used to solve the cube, and Plus IoT Center was being used to deploy and manage the program controlling the robot.
I'm deeply familiar with Plus IoT Center: at VMworld two years ago, Fujitsu provided the technology that delivered incremental program updates and demonstrated how it could rapidly reprogram connected cars.
At vFORUM, the top messages were, of course, vSphere's integration with Kubernetes, known as "Project Pacific," and the operations management portfolio that includes Project Pacific, known as the "Tanzu Portfolio."
Project Pacific integrates VMware vSphere, the virtualization hypervisor at the core of existing IT, with Kubernetes, the container management core of IT for DX.
It fuses existing IT with IT for DX.
VMware Tanzu provides a suite of software development tools and services to help you build, run, and manage modern, Kubernetes-based applications.
We've come to a point where running IT infrastructure alongside cloud-native applications is within reach.
But that was the message from the vendor side, so what about the customer side?
The message from customer case studies at vFORUM was that on-premises environments are using vRealize Operations, vRealize Automation, NSX-T, and so on to thoroughly streamline and automate private cloud operations. With the capacity that frees up, the next step is to advance toward hybrid clouds, container platforms, and multi-cloud (leveraging cloud-native services for DX), which truly sets the standard for a cloud journey.
What about the cloud service side?
The cloud service provider environments that make up customers' hybrid clouds deploy the VMware Cloud Foundation platform, allowing consistent operations using the same procedures as the customers' on-premises VMware environments.
For multi-clouds that include non-VMware environments, a vision was also presented in which consistent management, including on-premises environments, is possible using VMware cloud services.
This was almost the same as the cloud journey best practices I had described, so I was confident my ideas weren't wrong, but I was also shocked that so many customers were already putting these ideas into practice.
However, in our proposals, in addition to the "infrastructure perspective," we try not to forget the "system perspective." Please read the article I published on ITmedia for a view that differs from the usual sales-driven information. (Link)
What the Fusion of Existing IT with DX IT Brings to IT Departments
Project Pacific is a great way to fuse existing IT with IT for DX, in particular container technology, which has received the most attention for application-centric operations management. For those running conventional IT on VMware, tackling DX (cloud-native application) platforms built on container technology requires an entirely new body of knowledge. IT departments already juggle a diverse set of skills that is almost too much to handle: servers, storage, networks, virtualization, open source, PCs, operating systems, and security. It's hard to imagine there's room to take on a completely different skill set on top of that. IT departments themselves need to change the way they work.
To this end, existing IT systems need to be examined one by one and unnecessary systems discarded. What remains must then be thoroughly streamlined and automated to create room to engage with new technologies.
Fujitsu provides a mechanism to utilize existing IT for DX as is, and a mechanism to improve operational efficiency. Many people might ask, "Is it really possible to utilize existing IT for DX as is?" Our approach is not to modernize (re-engineer) existing IT, but to let even DX leverage mainframe data, the very embodiment of existing IT, via middleware. See the link for details (Japanese site; please use a translation service if necessary).
Additionally, to streamline existing IT operations, we also offer HCI to simplify infrastructure operations.
At this year’s vFORUM, I mostly listened to customer case studies, but as I mentioned earlier, those who are oriented towards DX were, without exception, working on the efficiency and automation of existing IT, and then moving toward the utilization of hybrid clouds, containers, and cloud-native applications.
Among the case studies was one that used vRealize Operations to streamline and automate the existing IT environment, and vRealize Automation together with NSX to streamline and automate system transfers. Where VMware products alone were not enough, they incorporated Ansible Tower and SaaS-based operations management, among many other innovations. Of course, Fujitsu's HCI comes with long-standing OEM versions of the vRealize products as well as NSX and other components, and for Infrastructure Manager, the operations management tool built into our HCI, we have long offered Ansible modules on GitHub.
It would bring me great pleasure to see such mechanisms, proven in IT infrastructure operations, leveraged effectively to gain the capacity to move quickly toward DX.
Now, if you're wondering whether it's easy to work with the container platforms leveraged for DX, my answer is that it isn't. The prerequisite knowledge is far removed from the skills used in existing IT, and you'll need to be prepared to spend time and effort learning those technologies from scratch.
However, to address this challenge, VMware will offer "Project Pacific," which integrates the container management of Kubernetes with VMware vSphere, the core of existing IT.
This significantly lowers the operational hurdles for IT departments working on DX: the goal is to deploy Kubernetes container environments the same way you deploy virtual machines from vCenter Server, which existing IT is familiar with. If you are interested in Project Pacific, a Japanese overview by VMware's Mr. Tabuki is publicly available on SlideShare.
Tanzu Mission Control, which manages these container environments, achieves operational consistency by uniformly managing both on-premises Project Pacific Kubernetes and Kubernetes platforms running on major cloud services. These are great approaches for IT departments running existing IT.
Considering Project Pacific from a System Perspective
However, I can't say that there is nothing to worry about.
It's not so much about the differences between existing IT and DX IT as it is about how to reconcile the differences in requirements. Let me give you a specific example.
The support policy for vSphere, which has underpinned existing IT to date, is five years, and the product's version upgrade cycle is roughly one year. Customers' first priority is "stable operation"; for that reason, some customers even avoid applying patches. Quite a few customers operate with policies that standardize on the combination of software versions they themselves have validated. To support this, Fujitsu's HCI comes with a management tool called "Infrastructure Manager" that lets you define an infrastructure baseline matching the software versions the customer has validated, from firmware to hypervisor; once defined, the tool displays and alerts on any infrastructure whose combinations deviate from that baseline.
That is a very attractive feature for customers who value system stability.
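As a thought experiment, the baseline idea can be sketched in a few lines of Python. Everything below (the component names, versions, and node labels) is invented for illustration; Infrastructure Manager's actual data model is of course far richer than this.

```python
# Hypothetical baseline check: compare each node's installed component
# versions against the single validated combination (the "baseline")
# and report every deviation. All names/versions here are illustrative.
baseline = {"firmware": "1.20", "esxi": "6.7U3", "vsan": "6.7"}

inventory = {
    "node-01": {"firmware": "1.20", "esxi": "6.7U3", "vsan": "6.7"},
    "node-02": {"firmware": "1.21", "esxi": "6.7U3", "vsan": "6.7"},  # drifted
}

def find_violations(baseline, inventory):
    """Return {node: {component: (expected, actual)}} for every mismatch."""
    violations = {}
    for node, components in inventory.items():
        diffs = {
            comp: (expected, components.get(comp))
            for comp, expected in baseline.items()
            if components.get(comp) != expected
        }
        if diffs:
            violations[node] = diffs
    return violations

print(find_violations(baseline, inventory))
# node-02's firmware has drifted from the defined baseline
```

The design point is that the check is purely declarative: define the validated combination once, and every deviation surfaces automatically instead of being hunted down by hand.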
So, what about Kubernetes?
Kubernetes, by contrast, releases a new version roughly every three months, and support generally covers the latest version plus the two versions before it; in other words, less than a year. Beyond features being added, customers must also be prepared for features being removed or interfaces changing. Kubernetes appears to take a stance that prioritizes efficiency gains and new functionality over stable operation.
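The "latest plus two prior versions" rule can be made concrete with a small sketch; the version numbers below are only examples, not a statement about which releases were current at the time.

```python
# Sketch of the Kubernetes support rule: the newest minor release plus
# the two minor versions before it are supported; anything older is not.
# With a ~3-month release cadence, three versions cover under a year.
def supported_versions(latest_minor, window=3):
    """Given e.g. latest_minor=17 for v1.17, return the supported v1.x list."""
    return [f"1.{latest_minor - i}" for i in range(window)]

print(supported_versions(17))  # ['1.17', '1.16', '1.15']
```

So a cluster pinned to an older release falls out of support within about nine months of that release, which is exactly the tension with vSphere's five-year policy discussed here.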
This example shows that unifying systems with different orientations and characteristics is very challenging and fraught with risk. Professionals running existing IT will have to change their operating style significantly if they are forced into frequent vSphere upgrades by a Kubernetes stance they don't even use yet. Operating without updating may become impossible, and we will be required to shift our mindset to a DX-style operating model.
On the other hand, things could go in the opposite direction: rather than focusing on DX, the emphasis stays on existing IT, which remains the core of operations, and Kubernetes version upgrades are held back to match the existing vSphere version. In this case, the gap with the Kubernetes versions supported by various cloud services widens, which carries its own risks when a future move to hybrid and multi-clouds is considered.
It would not surprise me if, to serve both sides, vSphere were split into two support lines: long-term support and short-term support. But I'm frustrated that I can't tell you anything, because I have no information right now.
Together with all of you, I'll be watching what happens to the vSphere support lifecycle when a Kubernetes-integrated version of vSphere arrives in the not-too-distant future, and whether Kubernetes version upgrades can be implemented in a way that doesn't impact ESXi.
VMware Day OSAKA
I gave a lecture on November 19 at VMware Day Osaka.
The title of my talk was “Learn About the Evolution of Hyper-Converged Infrastructure.”
As a special thank you to the readers of this blog, I have made the lecture materials publicly available.
I hope you will continue to enjoy reading this blog.
Please look forward to my next post.
[Blog "仮想化の風" (Winds of Virtualization)] No. 17: VMware Cloud Native Day 2019, VMware vFORUM 2019 Tokyo, VMware Day 2019 Osaka