Learn what happens to the SAN
In any company, data is typically moved over a SAN (storage area network), either via Fibre Channel or iSCSI, and this is how businesses have managed data for the past 15 years. Today we have 32Gb FC, with 100Gb iSCSI on the way, so there is room for growth.
At present there is a huge drive to adopt all-flash arrays delivering hundreds of thousands of IOPS. Based on the table above, with an SSD write speed of 350MB/s and a read speed of 500MB/s, it doesn't take many drives to saturate the network!
Below is a table showing the available bandwidth for each type of connectivity.
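As a rough sketch of that maths (the link speeds below are approximate usable figures for illustration, not the exact values from the table), the sums look something like this:

```python
# Rough illustration: how many SSDs (at ~500MB/s reads each) it would take
# to saturate a single storage link. Link throughputs are approximate usable
# figures chosen for this sketch, not the article's table values.
from math import ceil

SSD_READ_MBPS = 500  # per-drive sequential read, as quoted above

links_mbps = {
    "10GbE iSCSI": 1_250,
    "16Gb FC":     1_600,
    "25GbE iSCSI": 3_125,
    "32Gb FC":     3_200,
    "100GbE":      12_500,
}

for link, mbps in links_mbps.items():
    drives = ceil(mbps / SSD_READ_MBPS)
    print(f"{link:>12}: ~{drives} SSD(s) to saturate the link")
```

Even a 32Gb FC port is fully loaded by around seven SSDs reading flat out, and a 100GbE port by roughly 25.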
Adopting the newer NVMe all-flash arrays, which deliver 5x the performance of an SSD, will have an even more dramatic effect on the network. Connecting storage arrays, switches and servers isn't an easy thing to do, and so the cycle continues: more storage, more servers and more network ports.
Up until now this has worked very well, but the complexity of managing pools of storage, servers and switches (each with a different management set) is becoming problematic and expensive to run. For years it was the SAN that had the bandwidth to handle multiple storage streams, until now. Very soon a new storage technology will emerge: 3D XPoint memory. This is claimed to be 1,000 times faster than the NAND flash inside today's NVMe drives, and the networks won't be able to cope, not even the terabit networks that are on the horizon.
Imagine having a storage array populated with ten 3D XPoint memory modules; the performance would be 17.5TB/s!
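Taking that 17.5TB/s figure (which implies roughly 1.75TB/s per module), a quick, purely illustrative sum shows why even a future terabit link would struggle to move that data off the array:

```python
# Illustrative only: compares the aggregate bandwidth of ten 3D XPoint
# modules (17.5TB/s, as quoted above) with a few network link speeds.
modules = 10
per_module_tb_per_s = 1.75                 # implied by 17.5TB/s across 10 modules
array_tb_per_s = modules * per_module_tb_per_s

links_gb_per_s = {                         # approximate usable throughput in GB/s
    "32Gb FC":       3.2,
    "100GbE iSCSI": 12.5,
    "1Tb Ethernet": 125.0,                 # hypothetical future terabit link
}

for link, gb_per_s in links_gb_per_s.items():
    ports = (array_tb_per_s * 1000) / gb_per_s
    print(f"{link:>13}: ~{ports:,.0f} ports needed to carry {array_tb_per_s}TB/s")
```

Roughly 1,400 ports of 100GbE, or 140 terabit ports, just to drain one array.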
The end of an era for the SAN
Building a datacentre is a complex and difficult task. Setting aside the building, electricity and safety systems, the business needs to engage with different partners and manufacturers to deliver the feature set it needs.
- SAN Storage Systems – (DELL/EMC, HPE, Fujitsu, LENOVO, IBM, NetApp, Hitachi, AWS, Azure etc)
- Network (iSCSI/FC) – (D-Link, Netgear, Cisco, Extreme Networks, HPE, Juniper, Huawei, Arista, DELL/EMC etc)
- Servers – (DELL/EMC, HPE, Fujitsu, LENOVO, Cisco etc)
- Operating System – Windows 20xx, Linux, Unix
- Hypervisor – VMware, Citrix, Hyper-V, KVM …
- Software applications – databases, data management, financial, CRM, ERP, CAD etc
This equates to lots of meetings, software and hardware demonstrations, proposals, SLAs, timeframes, training, OPEX vs CAPEX discussions, and the initial and ongoing financial costs.
Even before a decision is made, the cost of this journey is high, tying up directors', managers' and staff's time and resources.
Complex licensing and the ongoing handling of multiple support and maintenance contracts are time-consuming, and after 5-7 years of use the cycle starts again.
Whilst the business will eventually embark on this journey, and the results should speak for themselves, every few years a technology comes along to shake things up, and hyper-converged infrastructure is that technology.
The future of data movement
Clearly a new way of providing data to applications must be found. Whilst a SAN can support thousands of storage devices, it isn’t ideal for providing ultra-high-speed data access to applications in the future.
The next step is software defined everything:
- SDSW – Software Defined Software (Hypervisors, VMs, Applications)
- SDN – Software Defined Networking
- SDS – Software Defined Storage
This might sound crazy, but having all your applications, servers, storage and networking as compute nodes, all controlled through a single GUI, makes sense. In the 90s the OS, application and storage resided on stand-alone servers; when virtualisation came along things changed, as it provided consolidation and better resource utilisation, and everything became distributed.
Now, with multi-processor, multi-core servers offering terabytes of memory, 24x 2.5” drives and NVMe support, all housed in a 2U chassis, you can run a fast and highly scalable SAN that can evolve when newer and higher-performing storage modules and memory become available.
Each node provides complete protection against storage, networking, server, OS or application failure, and the data is protected by distributing the blocks throughout the nodes. Disk rebuilds are a thing of the past: replace the failed drive and let the system take care of the rest.
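The article doesn't describe any particular placement algorithm, but the idea of protecting data by spreading block replicas across nodes can be sketched as below; the node names, replica count and `place_blocks` helper are purely illustrative, not any vendor's implementation:

```python
# Simplified sketch of distributed block protection: each block is written to
# REPLICAS different nodes, so losing one node (or drive) never loses data,
# and a "rebuild" is just re-replicating the affected blocks from survivors.
from itertools import cycle

NODES = ["node-1", "node-2", "node-3", "node-4"]   # hypothetical cluster
REPLICAS = 2                                       # copies kept of every block

def place_blocks(block_ids, nodes=NODES, replicas=REPLICAS):
    """Round-robin placement: returns {block_id: [nodes holding a copy]}."""
    ring = cycle(nodes)
    placement = {}
    for block in block_ids:
        targets = []
        while len(targets) < replicas:
            node = next(ring)
            if node not in targets:
                targets.append(node)
        placement[block] = targets
    return placement

placement = place_blocks(range(8))
failed = "node-2"
# Blocks touched by the failure; every one still has a surviving copy elsewhere,
# so the system simply re-replicates them in the background.
to_heal = {block: [n for n in holders if n != failed]
           for block, holders in placement.items() if failed in holders}
print(to_heal)
```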
As the cluster is not just for storage but for your complete IT infrastructure, the network connects to your edge switches whilst at the same time protecting your cluster from external attack.
Adding more compute nodes increases performance and reliability as the workloads are distributed throughout the cluster.
Why will this change the way we build SANs?
Firstly, we need to go back in time to see how computing has changed over the past few decades.
- During the 1970s the world was using mainframe computers and terminals.
- The 1980s provided companies with mini-computers from DEC, DG, Unisys etc.
- The 1990s brought in the age of standardised operating systems from Novell or Microsoft and processing power from Intel; the server was born.
- In the early 2000s the SAN and virtualisation were first introduced, with 1Gb/s FC-AL connectivity.
- From 2010 until now, the era of multi-processor consolidated servers, highly scalable networks and virtualisation software has been hugely successful, and flash storage has started to replace the humble hard disk.
- 2020 and beyond, the demise of the SAN and rise of hyper-converged infrastructures.
We deploy SANs because that's how it's been done for the past 18 years and it works. However, as storage technology delivers ever greater capacities and performance at an ever lower price point, every so often there is a seismic shift in the way computing and infrastructure are built, and this is one of them.
If you would like to know more about how we can assist in helping you design a future proof datacentre, call us on +44 (01256) 331614 or email us on solutions@fortunadata.com
Thanks for reading.