Nutrition and Recovery for Cyclists

A look inside a training-camp day of Team NetApp-Endura pro cyclist Michael Schwarzmann

 

In the life of a professional road cyclist, one thing counts above all: winning! Alongside genetic predisposition, plenty of ambition and targeted training, the right nutrition is a key building block of that success. Using Team NetApp-Endura pro Michael Schwarzmann as an example, we show what a training day looks like.

A key element of sports nutrition is weight management: the right "racing weight" is essential for road cyclists, and an optimal ratio of functional muscle mass to body fat is decisive. Many of us are therefore curious: how does a road pro eat, drink and train during the preparation phase?
Michael Schwarzmann, the 23-year-old sprinter of the German cycling team Team NetApp-Endura, gives us a look into his nutrition and training diary from the team's training camp on Mallorca.



Name: Michael Schwarzmann
Team: NetApp-Endura
Profession: professional road cyclist
Specialty: sprinter
Born: 7 January 1991
Height: 174 cm
Weight: 70 kg
Body fat: 6.5 %
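
Read as a quick back-of-the-envelope calculation (a simple two-compartment estimate from the profile numbers, not a lab measurement), these figures translate to:

fat mass ≈ 70 kg × 0.065 ≈ 4.6 kg
lean mass ≈ 70 kg − 4.6 kg ≈ 65.4 kg

which illustrates the muscle-to-fat ratio the article refers to.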

Photo: Michael Schwarzmann, Team NetApp-Endura

 

Diary: 19 January 2014

Breakfast, 08:00
Food: 1 bowl of buckwheat flakes with almond milk; 2 fried eggs; 1 roll with 10 g butter and ham; 1 roll with 10 g butter, almond butter and raspberry jam
Drinks: 2 cups of coffee, black

1st training session, 09:30-12:30: 3 hours of intensive road training, a mix of sprints and intervals
Food: 3 milk rolls with cream cheese and ham; 1 Natural Energy Cereal Sweet'n Salty bar
Drinks: 2 bottles (0.5 l each) with 5Electrolytes effervescent tablets

Recovery snack (immediately after the session), 12:30
1 Recovery Shake

Lunch, 13:30
Food: 1 large plate of pasta with olive oil and cheese, plus pan-fried turkey fillet; dessert: 2 pancakes and a bowl of fruit salad (honeydew melon, kiwi, orange)
Drinks: 0.33 l water

2nd training session, 15:00-17:00: 2 hours on the time-trial bike (GA2 and SB intervals, i.e. upper basic-endurance and peak-zone work), very high intensity
Food: 1 Natural Energy Cereal Sweet'n Salty bar
Drinks: 1 bottle (0.5 l) with a 5Electrolytes effervescent tablet

Recovery snack (immediately after the session), 17:00
1 MuscleUp bar; 1 glass of water

Dinner, 19:30
Food: 1 heaped plate of rice with olive oil and pan-fried pork, plus a mixed salad; 1 wholegrain roll with 10 g butter and ham; dessert: 1 bowl of fruit salad (honeydew melon, watermelon, kiwi)
Drinks: 0.33 l water

Snack, 21:00
15 g dark chocolate; 1 l water
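
A rough drinking rate can be read straight off the diary: 2 × 0.5 l over the three-hour morning session is about 0.33 l per hour, and 0.5 l over the two-hour afternoon session is 0.25 l per hour. These are one rider's figures on a January day on Mallorca; actual fluid needs vary with heat, intensity and individual sweat rate.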
 

Alongside various recovery measures, one thing must not be neglected for maximal performance: sufficient sleep. "During highly intensive training phases I therefore prefer to spend my free time in bed and try to recover," says the German pro Michael Schwarzmann.


Author info: Nutrition

Corinne Mäder

European Sport Nutrition Manager at PowerBar Europe GmbH

Corinne Mäder is a Certified Sports Nutritionist of the International Society of Sports Nutrition (CISSN) and studied nutrition science and dietary counselling. She is currently completing the International Olympic Committee's (IOC) postgraduate programme in sports nutrition. For several years she has advised professional athletes as well as recreational sportspeople.

© Copyright of this article: Corinne Mäder