Since the announcement of RTX technology back at SIGGRAPH 2018, the graphics world has been turned on its head. People working in AI, visual effects, and engineering are stringing these cards together and getting results in a fraction of the time it used to take. The single Titan RTX in my workstation rates on OctaneBench as the equivalent of six GTX 1070 Tis.
Inevitably, RTX technology would make its way into laptops: all of that power smashed into a 15" wide, ½" thick chassis. That time is here.
Many manufacturers are working with NVidia to bring RTX cards to their laptops/notebooks. The one we are looking at today is the Razer Blade 15 gaming laptop with a GeForce RTX 2080 with Max-Q Design, sporting 8GB of GDDR6 RAM. The processor is a 6-core/12-thread 8th Gen Intel Core i7-8750H at 2.2GHz, boosting up to 4.1GHz, backed by 16GB of RAM and a 512GB SSD.
The aluminum chassis makes this small laptop surprisingly dense. But the material choice isn't just for strength: the entire frame acts as a heatsink for the GPU and CPU, distributing the heat, keeping things running, and preventing burnt laps. And indeed, even during intense renders, I could hold the laptop in the palm of my hand.
The 144Hz display is crisp with luscious colors — which should satisfy most gamers. Sadly, the resolution is 1920×1080 — an upgrade to the Razer Blade Pro 17 will get you a 4K touchscreen.
You get three USB 3.2 ports, a Thunderbolt port, HDMI, and a Mini DisplayPort. I don't see how it all fits in the slim profile; the speakers sit on top of the I/O ports with absolutely zero clearance. The sound quality is surprisingly bold given the size of the speakers, with detectable stereo separation across the 13" span between left and right. You lose some bass, but what do you want from a speaker the size and thickness of a stick of gum? The sound is great, but don't mix your film score or beats on them.
But this isn’t why we are here. We are here for performance. So let’s get to it.
I don’t have a library of different cards to run speed tests on, so I’m using what I have available:
My personal laptop is an HP ZBook 15 G3 with an NVidia Quadro M1000M with 2GB of GDDR5 RAM and 16GB of system RAM. The CPU is a 4-core/8-thread Intel i7-6700HQ at 2.6GHz. I use it frequently on set and in meetings, running Max/Maya with V-Ray as well as Premiere Pro. It gets the job done.
My workstation is an HP z820 with an NVidia Titan RTX with 24GB of GDDR6 RAM and 64GB of system RAM, plus two 8-core/16-thread Intel CPUs at 3.1GHz. This is my primary workstation for anything I do at home. It could be beefier, but it plows through a lot.
The first test used OTOY's OctaneBench to get some quick numbers on where the systems stand. The baseline for OctaneBench is an NVidia GTX 980 rendering the provided sample scene in the Octane renderer: a score of 100 is the speed of the GTX 980. Below is slower; above is faster.
My ZBook with its M1000M sadly scored a total of 24.9. But to its credit, it has no Turing RT Cores to help it along.
The Titan RTX, not surprisingly, fared the best with a score of 322.183 — with RTX turned off. And a scorching 875.733 with RTX turned on — a 2.7x difference between on and off, and a 35x speed difference from the M1000M.
The Razer's RTX 2080 fell between the two — but not halfway between. It clocked in at 157.46 with RTX off: about half the Titan, but 6.3x faster than the M1000M. With RTX turned on, it jumps 2.9x to 456.57. Still a little over half the Titan, but we must keep this in context: these speeds come from a laptop only slightly thicker than an iPad.
| | M1000M | 2080 RTX | TITAN RTX |
|---|---|---|---|
| RTX OFF | 24.9 | 157.46 | 322.183 |
| RTX ON | N/A | 456.57 (2.9x faster) | 875.733 (2.7x faster) |
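The multipliers quoted above are nothing more exotic than ratios of OctaneBench scores. A quick sketch (the labels are mine; the scores are from the runs above):

```python
# OctaneBench scores from the tests above (GTX 980 baseline = 100).
scores = {
    "M1000M": 24.9,
    "2080 RTX (off)": 157.46,
    "2080 RTX (on)": 456.57,
    "Titan RTX (off)": 322.183,
    "Titan RTX (on)": 875.733,
}

def speedup(a: str, b: str) -> float:
    """How many times faster card/mode `a` is than `b`."""
    return scores[a] / scores[b]

print(round(speedup("2080 RTX (on)", "2080 RTX (off)"), 1))    # 2.9
print(round(speedup("Titan RTX (on)", "Titan RTX (off)"), 1))  # 2.7
print(round(speedup("Titan RTX (on)", "M1000M"), 1))           # 35.2
```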
The Premiere Pro test consists of three sequences, each 4K, with variations of GPU-accelerated effects.
The Intro Sequence is a simple two-layer comp with a title graphic on top, with the following effects: lens distortion, gaussian blur, mosaic, find edges, and a rotation transform.
The FinalAdjusted_MPE is an edit of a music video utilizing Premiere Pro's adjustment layers: scaled video, a luma curve adjust, fast blur, noise, tint, RGB curves, black & white, image blending, and a video overlay.
BigMix takes the above sequences and nests them into a simultaneous 4-up display, with one panel containing three more streams in a picture-in-picture effect. All streams are 4K playing at the same time.
Each sequence was tested playing live in the program window, and then prerendered to cache. The two methods were set up this way because the RTX cards allow realtime playback at 23.976 fps but are capable of playing back faster; the render to cache gives a more accurate measure of how fast the cards can process the footage.
As for the M1000M in the ZBook, it just rolled its eyes at me and said, "Whatever, man."
| | M1000M | 2080 RTX | TITAN RTX |
|---|---|---|---|
| Intro Sequence (live) | 6.7 fps | 23.9954 fps | 23.85 fps |
| Intro Sequence (render) | total 89 sec / avg 16 fps | total 19 sec / avg 75.74 fps | total 19 sec / avg 79.94 fps |
| FinalAdjusted_MPE (live) | 1.07 fps | 23.976 fps | 23.976 fps |
| FinalAdjusted_MPE (render) | total > 02:41:52 hr | total 161 sec / avg 28.22 fps | total 160 sec / avg 28.4 fps |
| BigMix (live) | 2.96 fps | 22.74 fps | 23.22 fps |
| BigMix (render) | total > 03:05:52 hr | total 177 sec / avg 26.07 fps | total 161 sec / avg 30.36 fps |
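One way to read the render rows: divide the average render fps by the 23.976 fps playback rate to see how many times faster than realtime each card chews through a sequence. A small sketch using numbers from the table (the labels are just mine for this illustration):

```python
PLAYBACK_FPS = 23.976  # the project frame rate

# Average render fps pulled from the render rows above.
render_fps = {
    "Intro (2080 RTX)": 75.74,
    "Intro (Titan RTX)": 79.94,
    "BigMix (2080 RTX)": 26.07,
    "BigMix (Titan RTX)": 30.36,
}

for name, fps in render_fps.items():
    # e.g. the 2080 renders the Intro Sequence at ~3.2x realtime
    print(f"{name}: {fps / PLAYBACK_FPS:.2f}x realtime")
```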
The team at RED released a new version of REDCINE-X that takes advantage of CUDA-accelerated decoding of RED R3D footage. Previously, REDCINE-X (and the RED SDK) was GPU-accelerated only for the debayering process, which takes the little red, green, and blue pixels recorded in the mosaic of the sensor and combines them into pixels representing the recorded colors. But before the debayering, the footage needs decoding.
When you film on a RED, the imagery is compressed. Lower ratios mean higher quality but larger files; higher ratios mean lower quality but smaller files. That compression has to be decompressed, a very computationally heavy process. The workflow for editors usually involves transcoding the RED files to a smaller format — usually ProRes — and then making smaller proxies so the editor can cut the piece together and watch it in real time.
Now that the decoding process is handed off to the RTX card, you get realtime playback of RED footage in REDCINE-X.
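To make the debayering step concrete, here is a toy half-resolution demosaic of an RGGB Bayer mosaic in NumPy. This is emphatically not RED's algorithm — the SDK uses far more sophisticated interpolation — it just shows how the sensor's single-channel samples combine into RGB pixels:

```python
import numpy as np

def demosaic_half(raw: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB block of a Bayer mosaic into one RGB pixel."""
    r = raw[0::2, 0::2]                              # top-left sample of each block
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average the two green samples
    b = raw[1::2, 1::2]                              # bottom-right sample
    return np.stack([r, g, b], axis=-1)

# A tiny 4x4 "sensor readout" just to exercise the function.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
rgb = demosaic_half(mosaic)
print(rgb.shape)  # (2, 2, 3) — half resolution, three channels
```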
With the provided 6K footage of an over-cranked tiger recorded at a 9:1 ratio, both the Titan RTX and the RTX 2080 had no trouble playing back at 23.976 fps.
The Razer Blade's GeForce RTX 2080 handled the 8K pretty well, maintaining about 21 fps. I gave it as much system and GPU RAM as I could so that REDCINE-X could create a buffer in RAM for the card to read from, and the RAM was filling at about the same rate the footage was playing back. On the Titan RTX, playback was 23.976 fps — the difference being its much larger frame buffer.
My ZBook can't deal with the 8K or 6K at full resolution, but bringing it down to ⅛ resolution got it working without too much trouble.
One takeaway: don't shoot 8K on a RED at such a low compression ratio — a higher ratio will reduce the amount of data to be moved around and processed.
| | M1000M | 2080 RTX | TITAN RTX |
|---|---|---|---|
| 6K Footage | < 1 fps | 23.976 fps | 23.976 fps |
| 8K Footage | < 1 fps | 21.08 fps | 23.98 fps* |
Blackmagic’s DaVinci Resolve is one of the key color grading tools in the film and television industry.
Ideally, when grading your footage, you should work with the RAW media — basically, the media right from the camera. That means no proxies for your grading, and that means a lot of processing power.
For this test, we have a couple of clips at 4K. Grading is all about processing — and quite heavy processing at that. The first clip uses OFX Light Rays, a gaussian blur with a mask, another gaussian blur, and then an OFX Glow. The second clip uses a gaussian blur with a mask, OFX Light Rays with a mask, a primary color correction, and a curves adjustment.
Both RTX cards were champs, with the 2080 only marginally slower than the Titan — although extrapolating those numbers out to much more intense processes would probably put the 2080 at roughly 75% of the Titan. Still, it's super impressive for something inside a laptop.
As with Premiere Pro, we test the playback and then the render. And yeah, the M1000M just said "Ran out of GPU Memory" — but it still processed.
| | M1000M | 2080 RTX | TITAN RTX |
|---|---|---|---|
| playback | 4 fps | 23.976 fps | 23.976 fps |
| render | 145 sec | total 40 sec / avg 17 fps | total 29 sec / avg 22 fps |
MAYA + ARNOLD
The Arnold GPU beta benefits from the RT Cores to speed up its Monte-Carlo-based render times. It also uses the NVidia OptiX denoiser to smooth out noise based on deep learning: the neural network learns about your scene as it renders and can figure out where it should denoise. That process is accelerated by the Tensor Cores on the GPU. So you are getting two calculations for the price of one.
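The reason a denoiser matters comes down to Monte Carlo math: noise falls off only with the square root of the sample count, so cleaning up the last bit of grain is the most expensive part of a render. A toy estimator — of pi rather than of radiance, but the convergence behavior is the same idea:

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: fraction of random points in the unit
    quarter-circle. Error shrinks roughly as 1/sqrt(samples), which is why
    doubling the quality of a noisy render costs 4x the samples."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4.0 * hits / samples

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))  # estimates tighten around 3.14159... as n grows
```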
The test consists of a robot character with glossy reflections inside an interior HDR environment along with six or seven area lights. The render times are taken from GPU mode and CPU mode, rendering to the Arnold Render View. The same two renders (GPU/CPU) are then launched in standalone Arnold from the command prompt. This provides a more accurate render time than the interactive window, because the goal of the standalone is to "render to completion", while the interactive is "render to presentable". Plus, you have the benefit of the denoiser in interactive mode to get to "presentable" even faster.
I left the M1000M out of this race. As you can see from the results, the Titan RTX and RTX 2080 finished first and second respectively, but the differences aren't crazy. Extrapolating them out to substantially longer renders would widen that gap.
Going from GPU to CPU is mind-boggling. The z820 performs admirably, but the falloff on the Razer is a little crazy. For one, the dual CPUs in the z820 are going to outperform the single CPU in the Razer Blade, even if the individual CPUs are older than the Razer's. Another factor is that the Razer may be clocking down its CPU as a safety precaution: as the CPU works this hard, it gets hotter, and in the confined space of a super-thin laptop that becomes dangerous for the laptop and for laps. Slowing it down keeps it cool, but also means slower render times.
| | M1000M | 2080 RTX | TITAN RTX |
|---|---|---|---|
| SOL (GPU) | n/a | 00:59 sec | 00:37 sec |
| SOL (CPU) | n/a | 21:46 min | 08:39 min |
| SOL Kick (GPU) | n/a | 02:48.873 min | 01:33.762 min |
| SOL Kick (CPU) | n/a | 16:55.824 min | 06:22.241 min |
This may sound like a condemnation of my ZBook, or even a claim that the Titan RTX is better than the GeForce RTX 2080. But that isn't what I'm saying at all.
This ZBook does everything I need it to do. It's beefy enough to get things done at the level I need them done.
My z820 is five years old and still a strong machine — even more so with the Titan RTX in it. This is where I do most of my heavy lifting. But it's damn heavy. This is where the Razer Blade comes in: when I have to be on set working directly with other creatives, this little laptop gives me the opportunity to present refined renders of models without having to caveat things by saying "don't worry that it's gray" or "just ignore the stuttering". Being able to see things and change things for the client, in near real time, is a brave new world.
Editing with RAW footage is also a game-changer. On set, an editor can immediately take the footage, offload it to an external RAID, and start cutting, informing the director whether what they are shooting is working. Larger productions DO have this, along with a bunch of equipment. But now, with a little laptop and an external drive, you can be as mobile as you need to be.
After this review, I just may have to run out and get one of these Razers — probably the Pro 17”.