Insight into a training camp day with Team NetApp-Endura road cyclist Michael Schwarzmann

 
User comments

Overall rating: 2.1 (11 reviews)

11 results – showing 1–10
 
Rocky
Overall rating: 5.0

What liberating knowledge. Give me liberty or give me death.

Emmy
Overall rating: 1.5

Hey, would you mind letting me know which web host you're utilizing? I've loaded your blog in 3 different internet browsers and I must say this blog loads a lot faster than most. Can you suggest a good internet hosting provider at a fair price? Thanks a lot, I appreciate it!

Tylerwal
Overall rating: 0.5

http://KJr439ucaNdjhbf.com

Marie
Overall rating: 0.5

Ironically, I don't think NFS vs. VMFS (FC, FCoE, iSCSI) is an all-or-nothing discussion. Instead I believe a unified storage platform which offers all protocols is the best fit for VMware, as the hypervisor is also natively multiprotocol.

At the end of the day VMs are files, and the files must be accessed by multiple hosts. So how can one achieve this access? To date, VMFS is probably the most successful clustered file system released. It allows LUNs, which natively cannot provide concurrent access by multiple hosts, to be shared. It provides this functionality on any array and any type of drive. It is a true storage abstraction layer. NFS is a network file system. Storage on the network. Simple and easy.

VMFS requires file locking and SCSI reservations to be handled by the host. It also runs into LUN-related issues like queue depths (found in traditional arrays). NFS has file locking, but it is handled by the NFS server/array, and LUN queues do not occur between the host and storage. At this point it should seem like VMFS and NFS are very similar, and that is correct.

I'd like to suggest that VMs can typically be classified into one of two groups: infrastructure VMs and high-performance VMs, with roughly an 80/20 split in terms of number of VMs. Infrastructure VMs are addressed in consolidation efforts and use shared datastores. High-performance, business-critical VMs can be more demanding and are stored in isolated datastores (one which only stores the data for a single VM).

With shared datastores, VMFS can hit artificial performance scaling limits related to shallow LUN queues and can hit max cluster limits due to file locks and SCSI reservations. The latter will be addressed in the next update of vSphere. With shared datastores NFS really shines, as the array manages the queues, locks, etc., so performance and cluster scaling is exceptional. With high-performance VMs, VMFS and NFS are on par as long as the link speed is not the bottleneck (e.g., 1GbE with NFS). However, some applications require a LUN in order to function or receive support.

If one wants to leverage storage virtualization, one needs to understand that with VMFS the virtualization happens at the LUN level and is not visible within vCenter. With NFS, storage virtualization is at the VM level and is observable within vCenter. As an example, when one deduplicates a VMFS and an NFS datastore, the savings are identical; however, the VMFS datastore is unchanged while the NFS datastore returns capacity to use for additional VMs. I guarantee you that if you have a large number of VMs to address (say 500 or more), leveraging both protocols is best.

VMDKs on VMFS have more options: thin, thick, and eager-zeroed thick. So the questions are when to use each, why, and what do I have to do to change the mode? With NFS, VMDK types are controlled by the NFS server/array. With NetApp, all VMDKs are thin and provide support for cluster services like FT (without being eager-zeroed thick).

It's late, and I'm not sure I've been as clear as I'd like in this discussion; however, my view is that a multiprotocol storage array is the best fit for a multiprotocol hypervisor. If you only have VMFS, everything will work great and you may have to manage a few more datastores for the consolidated VMs. If you only have NFS, then you will run into some grey areas around LUN-only solutions. (Note: this last point isn't really an issue, as NFS runs over Ethernet, and Ethernet also simultaneously offers iSCSI and FCoE, so there is no limit.)
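As a quick way to see the VMFS/NFS split described above in a real environment, the following minimal read-only sketch (not part of the original comment; the vCenter hostname and credentials are placeholders) uses the pyVmomi library to list each datastore's type, capacity, and free space from vCenter:

# Minimal pyVmomi sketch: list every datastore with its type (VMFS or NFS),
# capacity, and free space. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks; lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

gib = 1024 ** 3
for ds in view.view:
    s = ds.summary
    print(f"{s.name:30} {s.type:6} "
          f"capacity {s.capacity / gib:8.1f} GiB  "
          f"free {s.freeSpace / gib:8.1f} GiB")

view.DestroyView()
Disconnect(si)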

Majouda
Overall rating: 4.0

In defense of the HP P2000 G3 I have to say that:
1. The P2000 G3 user preferences include a Base Preference option: "Select the base for entry and display of storage-space sizes. In base 2, sizes are shown as powers of 2, using 1024 as a divisor for each magnitude. In base 10, sizes are shown as powers of 10, using 1000 as a divisor for each magnitude."
2. The same preferences also include Precision Preference and Unit Preference, which control the number of decimal places and the units sizes are reported in, respectively.
3. And anyway, the P2000 isn't really HP.
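For illustration only (this is plain Python, not P2000 firmware behavior), the difference between the two Base Preference settings comes down to dividing the same byte count by 1024 or by 1000 at each magnitude:

# Base-2 vs. base-10 display of the same byte count.
def format_size(num_bytes: int, base: int = 2) -> str:
    divisor = 1024 if base == 2 else 1000
    units = (["B", "KiB", "MiB", "GiB", "TiB"] if base == 2
             else ["B", "KB", "MB", "GB", "TB"])
    size = float(num_bytes)
    for unit in units:
        if size < divisor or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= divisor

raw = 2_000_398_934_016           # a nominal "2 TB" drive
print(format_size(raw, base=10))  # 2.00 TB
print(format_size(raw, base=2))   # 1.82 TiB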

Mar
Overall rating: 2.5

Nice article, Kenneth. I encountered something similar once, but they had a 3.5 environment. I put a DR in the datastore name for replicated volumes and an SA for non-replicated volumes. More ways to Rome.

Danielcah
Overall rating: 0.5

Hi one of my friends, Your Astella 8 *.

Sahidin
Overall rating: 0.5

When dealing with VMware, be sure to adjust the MTU on both the vSwitch and the VMkernel interface. Changing only the vSwitch is similar to changing the physical switch MTU but not the hosts: it won't hurt anything, but the ESXi hosts won't send packets larger than 1500 bytes. I'm a proponent of enabling jumbo frames, but I've also seen a lot of environments with issues due to inconsistent configuration, for example jumbo frames enabled on only some of the hosts. In my experience this results in occasional hangs in vMotion or NFS datastores going offline when accessed. Just something to watch out for.
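The inconsistency warned about above (jumbo frames enabled on only some hosts) can be audited read-only. This sketch, again with a placeholder vCenter address and credentials and using pyVmomi, prints the MTU of every standard vSwitch and VMkernel interface per host so mismatches stand out; it is an illustrative example, not something from the original comment:

# Report the MTU of each standard vSwitch and VMkernel (vmk) interface on
# every host, to spot hosts where jumbo frames were only partially enabled.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks; lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    net = host.config.network
    for vsw in net.vswitch:      # standard vSwitches
        print(f"{host.name}  vSwitch {vsw.name}  MTU {vsw.mtu}")
    for vnic in net.vnic:        # VMkernel interfaces
        print(f"{host.name}  {vnic.device}  MTU {vnic.spec.mtu}")

view.DestroyView()
Disconnect(si)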
