Review details

Samiullah
Overall rating: 0.5
Ironically, I don't think NFS vs. VMFS (FC, FCoE, iSCSI) is an all-or-nothing discussion. Instead, I believe a unified storage platform which offers all protocols is the best fit for VMware, as the hypervisor is also natively multiprotocol.

At the end of the day, VMs are files, and those files must be accessed by multiple hosts. So how can one achieve this access? To date, VMFS is probably the most successful clustered file system released. It allows LUNs, which natively cannot provide concurrent access by multiple hosts, to be shared. It provides this functionality on any array and any type of drive. It is a true storage abstraction layer.

NFS is a network file system: storage on the network. Simple and easy.

VMFS requires file locking and SCSI reservations to be handled by the host. It also runs into LUN-related issues like queue depths (found in traditional arrays). NFS has file locking, but it is handled by the NFS server/array, and LUN queues do not occur between the host and storage. At this point VMFS and NFS should seem very similar, and that is correct.

I'd like to suggest that VMs can typically be classified into one of two groups: infrastructure VMs and high-performance VMs. The two groups split roughly 80/20 in terms of number of VMs. Infrastructure VMs are addressed in consolidation efforts and use shared datastores. High-performance, business-critical VMs can be more demanding and are stored in isolated datastores (each storing the data for a single VM).

With shared datastores, VMFS can hit artificial performance-scaling limits related to shallow LUN queues, and can hit maximum cluster limits due to file locks and SCSI reservations (there's a toy sketch of the queue-depth effect at the end of this comment). The latter will be addressed in the next update of vSphere. With shared datastores, NFS really shines, as the array manages the queues, locks, etc., so performance and cluster scaling are exceptional.

With high-performance VMs, VMFS and NFS are on par as long as the link speed is not the bottleneck (e.g., 1GbE with NFS). However, some applications require a LUN in order to function or to receive support.

If one wants to leverage storage virtualization, one needs to understand that with VMFS the virtualization happens at the LUN level and is not visible within vCenter. With NFS, storage virtualization is at the VM level and is observable within vCenter. As an example, when one deduplicates a VMFS and an NFS datastore, the savings are identical; however, the VMFS datastore appears unchanged, while the NFS datastore returns capacity to use for additional VMs (worked numbers below).

I guarantee you, if you have a large number of VMs to address (say 500 or more), leveraging both protocols is best.

VMDKs on VMFS have more format options: thin, thick, and eager-zeroed thick. So the questions are: when to use each, why, and what do I have to do to change the mode? With NFS, VMDK types are controlled by the NFS server/array. With NetApp, all VMDKs are thin and still support cluster services like FT (without being eager-zeroed thick). A conceptual sketch of these allocation modes also follows below.

It's late, and I'm not sure I've been as clear as I'd like in this discussion; however, my view is that a multiprotocol storage array is the best fit for a multiprotocol hypervisor. If you only have VMFS, everything will work great, and you may have to manage a few more datastores for the consolidated VMs. If you only have NFS, then you will run into some grey areas around LUN-only solutions. (Note this last point isn't really an issue, as NFS is Ethernet, and Ethernet also simultaneously offers iSCSI and FCoE, so there is no real limit.)
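To make the shallow-queue point concrete, here is a toy Python model of why a shared VMFS datastore can cap aggregate in-flight I/O while the NFS model pushes queuing down to the array. The queue depth of 32 and the per-VM I/O count are illustrative assumptions, not benchmarks of any real host or array.

```python
# Toy model: N VMs each try to keep a few I/Os outstanding.
# A shared VMFS datastore is one LUN, so all VMs share that LUN's
# host-side device queue; with NFS there is no per-LUN queue between
# host and storage (the array manages queuing internally).

LUN_QUEUE_DEPTH = 32   # assumed per-LUN host queue depth (illustrative)
IOS_PER_VM = 8         # assumed outstanding I/Os each VM wants in flight

for vms in (2, 4, 8, 16, 32):
    wanted = vms * IOS_PER_VM
    shared_lun = min(wanted, LUN_QUEUE_DEPTH)  # everyone shares one queue
    nfs_model = wanted                         # no host-side per-LUN cap
    print(f"{vms:>3} VMs: want {wanted:>4} in flight | "
          f"shared LUN allows {shared_lun:>3} | NFS model allows {nfs_model:>4}")
```

Once the VM count grows, the shared LUN saturates at its fixed depth while the NFS model keeps scaling; that is the "artificial" limit I mean above.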
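And here are the deduplication numbers worked through. All figures (a 1,000 GB datastore, 800 GB of VM data, a 50% dedup rate) are hypothetical; the point is only where the savings become visible.

```python
# Hypothetical figures to show where dedup savings show up in vCenter.
datastore_size_gb = 1000   # provisioned datastore size (assumed)
vm_data_gb = 800           # logical VM data stored (assumed)
dedup_ratio = 0.5          # fraction of blocks deduplicated (assumed)

physical_used_gb = vm_data_gb * (1 - dedup_ratio)  # 400 GB on the array

# VMFS: vCenter still sees a 1000 GB LUN holding 800 GB of VMDKs;
# the 400 GB saved lives below the LUN and is invisible to the host.
vmfs_free_in_vcenter = datastore_size_gb - vm_data_gb        # 200 GB

# NFS: the datastore's free space reflects the array's file system,
# so deduplicated blocks come back as usable capacity in vCenter.
nfs_free_in_vcenter = datastore_size_gb - physical_used_gb   # 600 GB

print(f"VMFS free space shown in vCenter: {vmfs_free_in_vcenter} GB")
print(f"NFS  free space shown in vCenter: {nfs_free_in_vcenter} GB")
```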
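Finally, a conceptual sketch of thin vs. eager-zeroed thick allocation, using ordinary sparse files on a local POSIX file system as a stand-in for the datastore. This is not how ESXi creates VMDKs (that happens inside VMFS/NFS, via tools like vmkfstools); it only illustrates the difference in up-front allocation.

```python
# Stand-in for the VMDK allocation modes using sparse files.
# (Lazy-zeroed "thick" sits in between: space is reserved up front
# but blocks are zeroed on first write; that middle mode is hard to
# emulate portably, so only the two extremes are shown here.)
import os

SIZE = 64 * 1024 * 1024  # 64 MiB stand-in for a virtual disk

def thin(path):
    # Allocate nothing up front; blocks appear only as data is written.
    with open(path, "wb") as f:
        f.truncate(SIZE)  # sparse file: logical size only

def eager_zeroed_thick(path):
    # Reserve and zero every block up front (what FT historically
    # required on VMFS; a NetApp NFS datastore handles this for
    # thin VMDKs transparently, per the comment above).
    with open(path, "wb") as f:
        f.write(b"\0" * SIZE)

thin("thin.vmdk")
eager_zeroed_thick("ezt.vmdk")
for p in ("thin.vmdk", "ezt.vmdk"):
    st = os.stat(p)
    # st_blocks (POSIX-only) counts 512-byte blocks actually allocated.
    print(f"{p}: logical {st.st_size} bytes, on disk {st.st_blocks * 512} bytes")
```

The thin file reports its full logical size but consumes almost no blocks until written; the eager-zeroed file consumes its full size immediately.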