Storage arrays and hardware
Storage arrays and hardware can be delivered with a variety of storage and transport/link protocols. When building a fabric consisting of the dual AVFS head and storage arrays and/or hardware, the requirement is that AVFS must be able to communicate with these devices so that it becomes aware of their LUNs.
See the table for the block-based storage and transport/link protocols. AVFS itself is highly available (HA) for the metadata; for high availability on the data paths, multipath support is included in the AVFS driver. Storage arrays from Infortrend, Seagate and the big four are truly HA thanks to their write-cache synchronisation, and these can be connected via multipath.
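On a Linux head, the multipath support mentioned above is typically paired with dm-multipath. A minimal /etc/multipath.conf sketch for a dual-controller array might look as follows; the vendor and product strings are placeholders for illustration, not values taken from this document (Infortrend arrays commonly report the SCSI vendor ID "IFT", but verify against your hardware):

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "IFT"    # placeholder: check your array's SCSI vendor ID
        product              ".*"
        path_grouping_policy group_by_prio
        failback             immediate
        no_path_retry        12
    }
}
```

With this in place, both controller paths to a LUN are grouped under one /dev/mapper device, so a controller or link failure is handled transparently.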
For HPC clusters without the need for high availability, commodity hardware with RAID cards for spindle or SATA SSD access can be used. For HPC applications with NVMe SSDs, a RAID card is not fast enough, and software-defined solutions, for example from RAIDIX, Excelero (NVMesh) and others, are an option. These combinations with their different protocols can be integrated.
AVFS/iSCSI/Ethernet
This protocol combination is available at bandwidths up to 200 Gb/s. Because iSCSI encapsulates SCSI commands in standard TCP/IP, it runs over any Ethernet network without special adapters.
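As a concrete sketch, a Linux initiator would attach such an iSCSI LUN with open-iscsi. The portal address and IQN below are placeholders, and the commands are echoed rather than executed so the sketch is safe to run without hardware; drop the echo on a configured initiator:

```shell
#!/bin/sh
# Placeholder portal and target name -- substitute your array's values.
portal="192.168.10.20:3260"
target="iqn.2004-04.example:avfs-lun0"

# Discover the targets the array exports on this portal.
echo iscsiadm -m discovery -t sendtargets -p "$portal"

# Log in; the LUNs then appear as /dev/sdX block devices.
echo iscsiadm -m node -T "$target" -p "$portal" --login
```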
AVFS/FC
Fibre Channel, the long-established SAN transport, needs little introduction; FC-attached LUNs are consumed like any other block storage.
AVFS/iSER/RoCEv1
iSER (iSCSI Extensions for RDMA) runs the iSCSI protocol over an RDMA transport, here RoCEv1 (RDMA over Converged Ethernet). Data movement bypasses the TCP/IP stack, which lowers latency and CPU load compared with plain iSCSI.
AVFS/iSER/InfiniBand
The same iSER protocol over native InfiniBand combines familiar iSCSI LUN semantics with InfiniBand's RDMA bandwidth and latency.
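The iSER workflow on a Linux initiator is the same iscsiadm sequence as for plain iSCSI, bound to the iser transport interface that open-iscsi ships. Portal and IQN are placeholders, and the commands are echoed so the sketch runs without RDMA hardware:

```shell
#!/bin/sh
# Placeholder values; real use requires an RDMA-capable NIC or HCA.
portal="192.168.11.20:3260"
target="iqn.2004-04.example:avfs-iser0"

# Same discovery and login as iSCSI, but over the iser transport (-I iser).
echo iscsiadm -m discovery -t sendtargets -p "$portal" -I iser
echo iscsiadm -m node -T "$target" -p "$portal" -I iser --login
```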
AVFS/NVMe-oF/RoCEv1
NVMe-oF (NVMe over Fabrics) extends the NVMe command set across a network. Over RoCEv1 it gives remote NVMe namespaces latency close to that of locally attached NVMe SSDs.
AVFS/NVMe-oF/InfiniBand
The same NVMe-oF protocol over native InfiniBand, for fabrics where InfiniBand is already the interconnect of choice.
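On Linux, both NVMe-oF combinations above are driven with nvme-cli. The address and NQN below are placeholders, and the commands are echoed so the sketch runs without a fabric; drop the echo on a configured host:

```shell
#!/bin/sh
# Placeholder address and subsystem NQN; requires nvme-cli and an RDMA fabric.
addr="192.168.20.10"
nqn="nqn.2014-08.org.example:avfs-subsys0"

# List the subsystems the target exposes on the standard RDMA port (4420).
echo nvme discover -t rdma -a "$addr" -s 4420

# Connect; the remote namespaces then appear as /dev/nvmeXnY devices.
echo nvme connect -t rdma -n "$nqn" -a "$addr" -s 4420
```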
Standard & extreme performance integrated
Not all servers, workstations or desktops need extreme performance; standard and extreme performance storage can therefore be integrated side by side in one fabric.
East-west communication/data transfer
About data tiering and SSD caching