Is your Data Center really ready for the future?

When developing a project, it is important to cover today's key needs while also preparing for the future, taking into account the major trends in data solutions for the Data Center. From the perspective of Data Solutions (Networking), there are three major considerations to keep in mind for the future of Data Centers: Network Fabric, Software Defined Networking (SDN) and Network Functions Virtualization (NFV). With these essential elements in place, the changes coming to the IT world can be absorbed far more smoothly.

IS YOUR DATA CENTER REALLY PREPARED FOR THE FUTURE?

While it is true that any project (no matter which) must be dimensioned to meet current and future needs, it is also subject to a budget, and that budget may limit how much readiness for those future needs can actually be achieved.

Starting from the current situation, and considering the most important trends in data solutions for the Data Center today, let us briefly review the considerations to keep in mind for the future of a Data Center, from the perspective of Data Solutions (Networking):

- Network Fabric

- Software Defined Networking (SDN)

- Network Functions Virtualization (NFV)

NETWORK FABRIC:

At the beginning of Ethernet networks, when the communications model followed a "client-server" structure, various topologies flourished and ultimately converged on the well-known tree topology. This was very appropriate when the port capacity of modular switches was being consumed by traffic between the Mainframe (and its storage system) and the users who consulted or accessed that information from the periphery. Today we must ask ourselves: does my network still have this "North-South" (client-server) traffic pattern?

In the vast majority of current Data Centers the answer is NO. Not only has the traffic changed, becoming up to 75% "East-West", but the inherited topology has proven ineffective, or even an obstacle in itself, for traffic among applications and servers, and for the interactions among them that extract or generate the reports required by different business units. We live immersed in virtualization (synonymous with changes in a server's physical location), and we have needs such as "Big Data", which also directly and strongly affect the behavior of data traffic within the Data Center. As a logical result, the network topology must change to simplify and optimize the network; some manufacturers have walked this path for years, while for others the journey is just beginning.

There are different ways to form the FABRIC: SPB based, TRILL based, or based on proprietary communication (as internal communication has always been inside every modular switch).

There are also different solutions depending on the FABRIC size: stacking-based approaches, MC-LAG, Virtual Chassis, L2-Fabric, L3-Fabric and spine-and-leaf are the most common. For a better understanding, we will explain here a FABRIC solution that faithfully reproduces the modular switch.
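Before going into the components, a minimal sketch (in Python, with hypothetical sizes and speeds not tied to any vendor's product) shows how such a spine-and-leaf fabric can be sized as if it were one large modular switch:

    # Minimal model of a spine-and-leaf FABRIC sized as one logical switch.
    # All figures below (16 leaves, 6 spines, 48 access ports, 40 Gbps
    # uplinks) are hypothetical, chosen only to illustrate the arithmetic.

    LEAVES = 16                 # ToR switches: the fabric's "line cards"
    SPINES = 6                  # interconnection layer: the "backplane"
    ACCESS_PORTS_PER_LEAF = 48  # 10 Gbps server-facing ports
    UPLINKS_PER_LEAF = SPINES   # one 40 Gbps uplink from each leaf to each spine

    access_bw = ACCESS_PORTS_PER_LEAF * 10  # Gbps entering each leaf
    uplink_bw = UPLINKS_PER_LEAF * 40       # Gbps each leaf can send to the spines

    print("Ports in the logical switch:", LEAVES * ACCESS_PORTS_PER_LEAF)  # 768
    print("Oversubscription per leaf: %.1f:1" % (access_bw / uplink_bw))   # 2.0:1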

The "NETWORK FABRIC" or SINGLE layer Data Center is simply the action of physically disperse within the Data Center the three basic elements/components of a modular switch. This also applies in cases where the Data Center is dispersed in one or more floors, or in one or more buildings. Such basic elements are:

MODULAR SWITCH ELEMENTS           NETWORK FABRIC MODULES
Network (line) cards              FABRIC nodes acting as ToR switches
Chassis or "backplane"            Fiber interconnection modules (40/100 Gbps)
Processors or "Routing Engine"    Administration module that controls traffic between nodes

A network FABRIC is like a "distributed" modular switch.

The result is a solution equivalent to a modular switch, with a capacity of thousands of 1/10/40 Gbps ports in a single logical unit: a switch physically dispersed throughout the Data Center that eliminates outdated layers which add traffic, delays, hops and complexity to the network.

This type of one-layer topology has many advantages and greatly simplifies the network structure, and not only within the Data Center:

- A Spanning Tree-free topology (allowing "active-active" redundant links and equipment).

- A single logical unit for the entire network.

- Thousands of ports administered/configured as a single switch.

- A single hop/delay between terminals (end-points) on the network.

- Equidistance: redundant links and equipment with predictable bandwidth between all nodes and terminals (see the sketch after this list).

- Easy to add any service to the network: Routing, VPN, FW, L4-L7 services, segmentation, etc.
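As a quick illustration of the single-hop and equidistance points above, the following minimal sketch (a hypothetical topology, not a real network) models a small spine-and-leaf fabric and checks that every leaf pair is exactly one spine hop apart:

    # Sketch: in a spine-and-leaf fabric every leaf pair is exactly one
    # spine away, so latency and bandwidth are predictable (equidistant).
    # The topology sizes here are hypothetical.

    from itertools import combinations

    spines = ["spine1", "spine2"]
    leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

    # Full mesh between layers: every leaf connects to every spine.
    links = {(l, s) for l in leaves for s in spines}

    def paths(a, b):
        """All leaf-to-leaf paths; each crosses exactly one spine."""
        return [(a, s, b) for s in spines if (a, s) in links and (b, s) in links]

    for a, b in combinations(leaves, 2):
        hops = {len(p) - 2 for p in paths(a, b)}  # intermediate nodes per path
        assert hops == {1}, "every path should traverse exactly one spine"

    print("All leaf pairs are equidistant: one spine hop apart.")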

Have you ever thought about the impact of implementing a data solution for the Data Center with legacy topologies? Well, let us look at it from the other end: using new topologies (such as FABRIC) means lower CAPEX and OPEX, reduced operational complexity, increased flexibility, and greater ease in implementing virtualization or interconnecting several Data Centers.

SDN:   Software Defined Networking

Just as legacy network topologies (multi-layer or tree) are still being used, inefficiently, within the Data Center, legacy network equipment is also still in use: equipment with a monolithic internal architecture (HW and SW are a single piece, and if one fails, so does the other), with no separation between the data plane (input/output ports) and the control plane (the box's intelligence, route calculations, pure processing).

In contrast, there is equipment with a modular internal architecture (commercially available since the 1990s) that offers stability, fault isolation and redundancy. The problem is that only a few manufacturers separate the planes in 100% of their equipment. Others offer it only on some models, which makes it impossible to obtain a homogeneous solution across those manufacturers' portfolios, affecting users and forcing them to mix equipment "with" and "without" plane separation.

Now, why is it important to have equipment with separation of the data and control planes? Consider a simple example:

- If a fault occurs on "monolithic" equipment, say a problem with the process that handles ICMP (or a DoS attack with a "ping of death"), the equipment "gets stuck" and will need to be restarted (powered off and on), since it can no longer be accessed via the console port, by IP, or through the web interface... "it is frozen".

- If the same fault occurs on equipment with a modular architecture, it can still be accessed through another protocol, the console port, serial, or some other means. The reason is that software modularity allows faults to be separated and isolated, and it allows the technical support team to restart only the ICMP or TCP module... or the L3 (routing) services... or the TELNET module, etc. The network equipment is no longer a single point of failure by itself! Furthermore, even if the equipment's software fails or collapses completely, packet processing is not affected, because the data plane keeps a copy of the routing tables calculated by the control plane: they are "independent machines", one running the SW and the other processing packets (the HW of the network ports), as the sketch below illustrates.
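A minimal sketch of that idea (purely illustrative classes, not any vendor's software): the data plane forwards from its own copy of the tables, so a control-plane crash does not stop packet forwarding:

    # Sketch of why plane separation matters: the data plane keeps its own
    # copy of the forwarding table, so packets keep moving even if the
    # control plane process hangs or is restarted. Purely illustrative.

    class ControlPlane:
        """Computes routes (the 'routing engine')."""
        def __init__(self):
            self.alive = True
        def compute_routes(self):
            return {"10.0.1.0/24": "port1", "10.0.2.0/24": "port2"}

    class DataPlane:
        """Forwards packets from its own FIB copy (the 'line cards')."""
        def __init__(self):
            self.fib = {}
        def install(self, routes):
            self.fib = dict(routes)  # independent copy, not a shared reference
        def forward(self, prefix):
            return self.fib.get(prefix, "drop")

    cp, dp = ControlPlane(), DataPlane()
    dp.install(cp.compute_routes())

    cp.alive = False                  # e.g. an ICMP module crash freezes the CP
    print(dp.forward("10.0.1.0/24"))  # "port1": forwarding continues unaffected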

To operate, SDN starts from the fundamental principle that networking equipment has separate control and data planes: the "brain" of the equipment is decoupled from its data ports; they are independent machines.

SDN leverages precisely this independence to move the control plane to a centralized point in the network, and adds further benefits such as the "programmability" of the network control plane. In simple words, and to sum up, an SDN controller offers data networks an equivalent of what we have with server virtualization, for instance:

- Automating programming/configuration tasks of network equipment.

- Reduction of the network "implementation time" to minutes (versus weeks as it is today).

- Support staff now only have to "say what they want, not how to configure it" (see the sketch after this list).

- Complexity is transferred to the controller manufacturer, not to the local IT staff.

- SDN is multi-vendor (OpenFlow); the Data Center is not tied to a single brand of network equipment.
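To illustrate the "what, not how" point, here is a minimal sketch (the intent format, path and rule layout are hypothetical; no real controller API is invoked) of a controller compiling one high-level intent into OpenFlow-style per-switch flow rules:

    # Sketch of "say what you want, not how": a high-level intent is
    # compiled by a (hypothetical) controller into per-switch flow rules,
    # OpenFlow-style match/action entries. Purely illustrative.

    intent = {"allow": {"src": "10.0.1.0/24", "dst": "10.0.2.0/24", "port": 80}}

    switches = ["leaf1", "spine1", "leaf2"]  # path computed by the controller

    def compile_intent(intent, path):
        rules = {}
        match = intent["allow"]
        for sw in path:
            rules[sw] = {
                "match": {"ipv4_src": match["src"],
                          "ipv4_dst": match["dst"],
                          "tcp_dst": match["port"]},
                "action": "output:next_hop",  # placeholder action
            }
        return rules

    # The controller would push these rules southbound (e.g. via OpenFlow);
    # the operator never configures each box by hand.
    for sw, rule in compile_intent(intent, switches).items():
        print(sw, "->", rule)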

Graphic representation of an SDN solution.

NFV:   Network Functions Virtualization

NFV is a trend complementary to SDN, and the concept behind it is to virtualize "network applications" as a "SW + licenses" solution, free of dedicated HW. Both SDN and NFV seek to simplify the network and accelerate deployment times. SDN was born in the data center, while NFV was born in the world of service providers, which is why we might easily believe that NFV does not apply to the enterprise world, when the truth is that it is only a matter of scale.

The need behind NFV is to reduce the time required to implement such network applications, time that is associated with provisioning specific/proprietary HW. In contrast, enabling standard servers is very fast, and we can almost say that virtualization has been a standard at the server level in many Data Centers for quite some time now.

Now, whether we speak of the enterprise or "carrier" environment, we can see the benefits NFV offers whenever we have to enable a new network application, such as a Firewall, VPN, UTM solution, AP controller, IP telephony, among others. If instead of proprietary or purpose-built HW we use industry-standard servers, implementing such a network application is reduced to provisioning the required virtual server and buying the SW and licensing to be deployed on site. We avoid the "box shipping" times and the "physical installation" process: validating rack space, power availability, cooling capacity, new cabling, etc. (see the sketch below).
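A minimal sketch of that contrast (the deploy_vnf function and its parameters are hypothetical, standing in for whatever VM/VNF manager the Data Center uses; no real orchestrator API is implied):

    # Sketch contrasting NFV provisioning with hardware provisioning.
    # The deploy_vnf call below is hypothetical; it stands in for any
    # VM/VNF manager running on industry-standard servers.

    HARDWARE_STEPS = [
        "ship appliance", "validate rack space", "validate power",
        "validate cooling", "run new cabling", "physical install",
    ]  # days to weeks

    def deploy_vnf(image, licenses, vcpus=4, ram_gb=8):
        """Instantiate a network function (e.g. a virtual firewall)
        on an existing standard server. Purely illustrative."""
        vm = {"image": image, "licenses": licenses,
              "vcpus": vcpus, "ram_gb": ram_gb, "state": "running"}
        return vm  # minutes, not weeks: no boxes shipped, no re-cabling

    fw = deploy_vnf("virtual-firewall-x", licenses=["fw-base", "utm-addon"])
    print(fw["state"], "- skipped:", ", ".join(HARDWARE_STEPS))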

NFV as presented by the group that originally proposed it: SDN and OpenFlow World Congress / Darmstadt, Germany / Oct 2012.

In the end, these current trends have in common the reduction of network complexity and implementation times. Each of these three trends offers features that may represent a competitive advantage for a company or institution.

You may need to consider all three in your design, or just one of them; what is certain is that we are in a period of great change in the world of IT, with a marked evolution in the way data networks are designed, operated and managed.

That is why companies cannot afford to keep designing the Data Center in the traditional way; they should prepare for the future. At ANIXTER we can help holistically with the conceptual design, including physical infrastructure, data networking infrastructure, Power & Cooling solutions, physical security and many other sub-systems. Find your ANIXTER representative, contact us, and we will work with you to provide more information and details on the topics reviewed in this article.

  José Leyva
  DataCenter
  Business Developer
  Anixter Inc. - CALA Region
  www.anixter.com