SANTA CLARA, Calif. -- NFV & the Data Center -- NFV requires operators to find new ways of looking at basic values like performance, reliability and security.
Performance metrics will change in migrating to NFV, from raw performance to performance per cubic meter, or performance per watt. "It's not just about providing raw performance. We wouldn't be going to generic hardware processing if all we were concerned about is maximizing raw performance," Marc Cohn, chairman of the Open Networking Foundation market education committee, said on a panel here.
Performance metrics are just the beginning of the changes. Virtualization will transform many ways of looking at and managing the network.
Virtualization will drive new ways of using locations. Cities will have multiple data centers, to provide capabilities such as redundant failover, Wind River Systems Inc. CTO Gareth Noyes said.
Data centers will come in all sizes, he said. "You'll get mega ones but you'll have a ton of closet-sized data centers as well."
Networks need to be able to use idle resources, which requires being able to discover them, Samir Sharma, solutions architect, Netcracker Technology Corp., said.
Likewise, security requirements are changing. Security is a process, not a result -- you don't build a product that's secure forever, but instead the product evolves to meet changing threats. That's where open source can help by having many eyes to find vulnerabilities, Noyes said.
Security is an issue for both open and proprietary platforms. "It seems like even the closed platforms are insecure," Steve Shaw, product marketing director, SDN, for Juniper Networks Inc. (NYSE: JNPR), said, citing network security threats cropping up recently. Operators are concerned that the cloud won't provide transparency into who is accessing the network and whether those users are protected.
NFV and SDN (which are rarely discussed separately) offer the potential for improved security, enabling policy-based decisions rather than a focus on individual hardware devices, Cohn said.
Security "is a scarlet letter that open source and the open platform have," Sharma said. But open systems benefit from the same security processes -- and are vulnerable to the same attacks -- as closed systems. Much of the fear of open systems is really just fear of the unknown. "You're still going to do everything you did before for security, if not more," he said.
Reliability considerations change because hardware "is factored out of the equation," Cohn said. That's how hyperscale operators work -- they design data centers taking hardware failures for granted. They add hardware for scalability rather than reliability.
The "monkey concept" becomes the test for network availability -- the network needs to continue to operate reliably even if a monkey goes through the data center unplugging equipment wantonly and at random, Sharma said.
Separating software from hardware will require new ways to scale and configure networks. You don't want to have the same connectivity problems in the virtual world as in the physical world. "You want the services to flow. We're in the early stages of that with NFV," Shaw said.
Heavy Reading analyst Roz Roseboro agreed. "If all we do is take something that we do on a physical platform and replicate it in software, we miss an opportunity to re-architect," she said.
Software needs to be based on "micro-services," where "the system can dynamically change based on requests coming in, versus a monolithic version that had to do everything in one bucket," Sharma said.
The new network needs require openness, Cohn said. "Openness means a single vendor doesn't control everything." That's different from just having an open interface where a single vendor retains control.
Rather than vertical purchasing, RFPs are becoming horizontal, Shaw said. Instead of buying the whole stack from one vendor, operators build a platform and let services vendors, such as evolved packet core (EPC) and IP Multimedia Subsystem (IMS) providers, compete for the business on top of the platform.
Noyes added that changed purchasing radically disrupts the supply chain, which is part of why virtualization deployments are moving slowly, with active trials today and field deployments anticipated for 2016.