SiPearl working on a giant 72-core processor for supercomputers

A Twitter post by a French politician inadvertently revealed details about SiPearl’s upcoming high-performance processor powered by Arm’s Neoverse Zeus cores. 

The SiPearl Rhea system-on-chip will be used for an experimental ‘towards exascale’ supercomputer platform, with Rhea’s successor, SiPearl’s Cronos, used for the first European exascale supercomputer. 

The SiPearl Rhea SoC is based on 72 Arm Neoverse Zeus cores, which should allow it to compete against Amazon’s Graviton2 and Ampere’s Altra processors. The processor also features 68 L3 cache slices on its mesh network, along with various IP blocks. Frequencies of the SoC as well as cache sizes are unclear. 

European supercomputer chip

One of the particularly interesting features of SiPearl’s Rhea is its memory subsystem, which relies on four HBM2E stacks that promise very high memory bandwidth, as well as four or six DDR5 SDRAM controllers for extra memory capacity. With 72 cores to feed, the chip needs a lot of RAM, so a hybrid memory subsystem is justified. 

Today’s most powerful supercomputer — Fugaku — is based on Fujitsu’s custom 48-core A64FX processor (based on the ARMv8.2-A architecture) that also uses HBM2 memory, but does not use more traditional DDR4. 

The SoC is set to be made using TSMC’s N7 manufacturing technology, according to the Twitter post, though SiPearl earlier indicated that it would use TSMC’s N6 fabrication process, which uses extreme ultraviolet (EUV) lithography. 

SiPearl expects to launch its Rhea processors sometime in 2021 and use them to build an experimental pre-ExaFLOPS-class machine. An even more advanced family of SoCs named Cronos is set to emerge in 2022 or 2023, and it will power exascale supercomputers developed in Europe.

Alleged AMD Radeon RX 6000 engineering sample spotted

A photograph showing an engineering sample of an alleged RDNA2 graphics card has been published online.

Is this Big Navi?

A picture of a mysterious engineering sample has emerged on Bilibili. It is not possible to determine which graphics processor the card carries, but one could guess it might be one of the upcoming Big Navi variants (or any other Navi 2x family processor at this point). The timing of this leak is definitely interesting.

Some labels have been attached to the board, such as ‘Typical Samsung 16Gb‘, indicating the memory type used for this board. This means that each module is 2GB. The leaker claims that the card has 3+3+2 modules (8 in total), which would imply this model has a 256-bit memory interface.
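As a quick sanity check of that math, here is a minimal sketch, assuming the standard 32-bit interface per GDDR6 module:

```python
# Hypothetical sanity check of the leaked configuration, assuming each
# GDDR6 module has the standard 32-bit interface.
modules = 3 + 3 + 2            # module count claimed by the leaker
density_gbit = 16              # 'Samsung 16Gb' label: 16 gigabits per module

capacity_gb = modules * density_gbit // 8   # bits -> bytes: 8 x 2GB = 16GB
bus_width = modules * 32                    # 8 x 32-bit = 256-bit

print(f"{capacity_gb} GB over a {bus_width}-bit bus")   # 16 GB, 256-bit
```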

Another label says ‘Full ???? Typical XT A0 ASIC’. The XT variants are usually reserved for the Radeon RX series, meaning that this should be a sample of a gaming card, not a Radeon Pro.

The board is covered in DIP switches and various voltage measurement points; it is indeed far from a retail product. There is even a huge CPU-type cooler attached to the graphics processor. To the left, we can also see LED load indicators and at least two power connectors (one being 8-pin).

AMD today announced that it will host a Radeon RX 6000 event on October 28th. The company promised to reveal more information on the next-generation RDNA2 graphics cards. The manufacturer did not confirm the launch date yet, but the series should be expected to officially debut in November.

The Armari Magnetar X64T Workstation OC Review: 128 Threads at 4.0 GHz, Sustained!

Blitzing around a race track in a fast car only ever convinces you of one thing: I need to go around the track even faster. I need a better car, I need a better engine, better brakes, or better tires. I need that special go faster juice, and I need to nail the perfect run. The world of professional computing works the same, whether it comes down to rendering, rapid prototyping, scientific compute, medical imaging, weather modelling, or something like oil and gas simulation, the more raw horsepower there is, the more can be done. So enter the new Armari Magnetar X64T – an overclocked 64-core Threadripper 3990X that holds the new SPECworkstation3 world record. We got hold of one. It’s really fast.

Playing with Performance

AMD’s Threadripper 3990X is one of those crazy processors. It comes at you with some of the best statistics of any processor: it has 64 cores and 128 threads, it has 256 MB of L3 cache, and it has a TDP of 280 W, which allows for a 2.9 GHz base frequency up to a 4.3 GHz turbo. It is overclockable, and so with the right system those frequencies can go even higher. With the best binned 7nm chiplets, paired with quad-channel DDR4-3200 memory, for multithreaded workloads it is one of the ultimate powerhouses anyone can build in a single socket with a socketable processor.

In our initial review of the Threadripper 3990X, it blitzed any software that could take advantage of all those threads – the nearest competitors were the 32-core Threadrippers, or Intel’s 28-core Xeon-W processors. We even put it up against two of Intel’s $10000 28-core Xeons, and it won pretty much everything by a large margin.

So what happens when we overclock it? There are those that want more, and not just those overclocking for fun – workstation customers, like animation studios, are always looking for ways in which they can rapidly render frames for upcoming projects. If a cooling system can be built to withstand it, and the power is available, then there’s always scope to get more out of the hardware that comes from the big players. This is what the Armari Magnetar X64T Workstation is designed to do – get more.

To that end, today AMD and Armari are announcing that the Magnetar X64T workstation, a system that you can buy, delivers the best off-the-shelf performance in SPECworkstation3 ever seen.

The Magnetar X64T: Performance Reimagined

The key highlight from this review, should you not read any further, is that this system is built to blitz workloads. The Threadripper 3990X is usually fast enough in its own right, but Armari have gone above and beyond. The goal of this system is to be an off-the-shelf powerhouse that requires very little setup from its customers.

Armari, perhaps a lesser-known system integrator, is a company that has in recent years focused on building systems for 3D animation, video editing, and scientific research. With over 20 years of experience, Armari’s hardware has gone into high-performance computing solutions and clusters that have featured in the TOP500 lists, as well as rendering server farms for the top animation, VFX, and CGI studios in Soho, London.

These are clients who want the best performance, and Armari positions itself not so much as a boutique system builder, but as something between the big OEMs (like Dell/HP) and the main retailers, offering custom solutions by leveraging its network of cooling and hardware contacts around the world. This enables the company to build custom chassis, obtain optimized memory, order power supplies with custom connector configurations, and ensure consistency from batch to batch when ordering from its partners. Armari’s Technical Director Dan Goldsmith mentioned to me that working with partner companies for so long has given the company access to rapid prototyping and component consistency, with continual feedback from partners such as EKWB, ASRock, Tyan, and the many other ODMs that Armari leverages on a regular basis. 

The Magnetar X64T, I was told, leverages the strong relationship Armari has with AMD. The Opteron was a popular range a decade ago, and that partnership has been maintained through today. The goal of the Magnetar project was to create a system that offers the best that Threadripper can deliver while still remaining an under-the-desk workstation platform. The project was slightly delayed due to COVID, and AMD now has Threadripper Pro, but those processors are not overclockable – for those that want raw performance, AMD and Armari believe they are onto a winner.

The key to the system is how Armari is cooling the processor, and the choice of components. The Magnetar X64T features a custom water cooling loop, which is perhaps nothing new in its own right; however, the company has created a component chain to ensure consistency in its design, as well as using some of the most powerful options available.

The water block is probably the best place to start, because this is a completely custom-for-Armari design built in partnership with EK Water Blocks. This block is specifically built for this one motherboard, the ASRock TRX40 Taichi, and applies cooling to both the processor and the power delivery. The block works in conjunction with the highest-quality thermal pads on the market to ensure a flat connection with the water block. As it also covers the power delivery, Armari worked with ASRock to enforce a consistent z-height for all the power delivery components, something that can vary during manufacturing, and to maintain that consistency on a batch-by-batch basis. Pair this up with Armari’s custom FWL liquid cooling pump, reservoir, tubing, 3x140mm radiator, and fan combinations (many of which are custom from their respective ODMs), and we have a cooling capacity in excess of 700 W. The coolant is a special long-life coolant designed for 24/7 operation over three years, and the standard warranty includes service during those three years, with collection and return at no extra cost.

Now, the ASRock TRX40 Taichi isn’t the top Threadripper motherboard on the market, and Armari fully admits that; however, it points out that the best motherboard available costs twice as much. In working with ASRock, Armari was able to coordinate what was needed on the motherboard’s component list, as well as enable a custom BIOS implementation for additional control. One of the tradeoffs I was told about is that a cheaper motherboard might mean slightly cheaper components; however, Armari says that its cooling system and setup were cooperatively tuned to meet its customers’ demands.

With this cooling arrangement, Armari have fired up the overclock. In our initial review of the Threadripper 3990X, we were observing ~3450 MHz during our sustained testing with the CPU reaching its full 280 W. For the Armari Magnetar X64T, we have an all-core frequency of 3950-4100 MHz, depending on the workload. Users might scoff at the +500-650 MHz lift, but bear in mind this is across all 64 cores simultaneously, and the cooling is built so that this frequency can be sustained for renders or simulations that might take days. Further details of frequency and power come later in the review.

While having the overclocked CPU is great, the Magnetar X64T system we were delivered also had memory, graphics, and storage.

The system as shipped came with a PNY NVIDIA RTX 6000 graphics card, which is essentially an RTX 2080 Ti on steroids with 24 GiB of GDDR6, and the system can be configured with two of them. As Threadripper is not an ECC-qualified platform, the X64T comes with the peak supported configuration, 256 GB, but with custom SPD profiles to run at up to DDR4-3600. Unfortunately, due to how quickly this system was rebuilt for this review, the unit I was sent was using DDR4-3200 at CL20, as some of the original memory was accidentally splashed with coolant, and Armari wanted to ensure I wouldn’t have any issues with the system.

Storage comes in two forms, both of which are PCIe 4.0. As shipped, the system was specified with a Corsair MP600 1 TB PCIe 4.0 x4 boot drive. Another two of these drives were provided inside an ASRock Hyper M.2 PCIe 4.0 card, plugged into one of the PCIe 4.0 slots. Armari says that as newer and bigger PCIe 4.0 drives come to market beyond the Phison E16 solutions, this should expand to higher-capacity or faster drives as required.

The power supply is a fully custom 1600W 80PLUS Gold unit, rated to run at 50 ºC with 93% efficiency. It has a custom fan profile directly from the OEM, and is set to only spin up the fans if the power required goes above 40% (640 W). The fully modular PSU has nine 8-pin connections and five 6-pin connections, 14 in total, should any customer want to go above and beyond. The PSU on its own has a 10-year warranty.

The motherboard has a 2.5 GbE wired network port and a 1 GbE wired network port, and Armari does offer a 10G upgrade (space permitting based on the PCIe slots). Wi-Fi 6 support comes as standard, as does the ALC1220 audio configuration.

The chassis is the last custom part to discuss, with the system featuring the Magnetar naming on the front with the Armari logo. The chassis is big, but quite standard for a high-end workstation platform: 53cm x 22cm x 57cm (20.9-in x 8.7-in x 22.4-in), with a typical single GPU weight of 18 kg (39.7 lbs).

The chassis comes with handles on top that fold away, making the system easy to move around as required. I love these.

Inside there is lots of ‘space’ for directed airflow. The pump and reservoir are found in the bottom of the case, underneath the standard drive bays, while the 3x140mm double-thick radiator is at the top, built into the side of the chassis. This is a special hinged mount, which makes the side panel easy to remove and the cooling apparatus easy to inspect.

There is a PCIe retention bracket for any add-in card installed, and in the base of the chassis is the power supply, hidden away. The insides weren’t necessarily built to look aesthetically pleasing, however the system as provided by Armari has a nice clean look.

Due to a couple of issues with arranging this system for review, I was told that Armari normally adds a custom sealant to help with the liquid cooling loop; however, as it requires 24 hours to set, they weren’t able to in this instance. The liquid cooling loop is pre-tested at over 1 bar of pressure for every system they build, along with full stability and thermal testing before shipping. If a system needs to be returned for warranty for any reason, Armari can supply a loaner system if required. As mentioned above, the standard warranty includes one full service and inspection, and the coolant can be replaced in order to give the customer another 3 years of ‘hassle-free’ operation.

The News Today: World Records

Today AMD and Armari are announcing that the new Magnetar X64T has set a new world record in the SPECworkstation 3 benchmark. The system that achieved this result is, by and large, the system I am testing today (it was stripped down and rebuilt with an updated water block). For the customers that Armari typically services, this is one of the primary benchmarks they care about, and so a new world record for a commercially available system should put Armari’s offerings high on their list.

Our testing, as shown over the next few pages, is similarly impressive. We already saw that the Threadripper 3990X with no overclock was a formidable beast in almost every one of our rendering and compute workloads. The only real comparison point we have is our W-3175X workstation that was provided when we reviewed that system.

The Magnetar X64T-RD1600G3 FWL (the full name) system in our testing is ~£10790 ($14200) excluding tax. This includes a Windows 10 Professional 64-bit license and Armari’s 3-year premium workstation warranty, with 1 year on-site and 2nd/3rd-year parts and labor, along with a loaner system for the duration of any repairs.

Read over the next few pages for our testing on Performance and Power.

Rumored Nvidia GeForce RTX 3060 Specs Hit the Sweet Spot

Nvidia might have already announced its GeForce RTX 30-series graphics cards, but the Ampere rumor mill never rests. Hardware insider @kopite7kimi, who was spot on with Ampere’s specifications before launch, just revealed alleged specifications for another undetermined SKU that could well earn a spot on our Best Graphics Cards list.

According to the tweet, the leaker doesn’t know if the card in question is the GeForce RTX 3060 Ti or GeForce RTX 3060 Super. In either case, the graphics card will reportedly leverage the GA104 silicon, similar to the one that’s confirmed to be inside the GeForce RTX 3070. More specifically, according to kopite7kimi, the GeForce RTX 3060 Ti or GeForce RTX 3060 Super would land with the GA104-200 die and 4,864 CUDA cores.

As a quick recap, each streaming multiprocessor (SM) in Nvidia’s Ampere architecture houses 128 CUDA cores, one RT core and four Tensor cores. If the core count is accurate, the GA104-200 die has 38 enabled SMs, amounting to 38 RT cores and 152 Tensor cores.
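Here is that arithmetic spelled out, as a small sketch based purely on the rumored core count and Ampere’s published per-SM layout:

```python
# Derive unit counts for the rumored GA104-200 die from Ampere's SM layout:
# 128 CUDA cores, 1 RT core and 4 Tensor cores per SM.
CUDA_PER_SM, RT_PER_SM, TENSOR_PER_SM = 128, 1, 4

cuda_cores = 4864                       # rumored core count
sms = cuda_cores // CUDA_PER_SM         # 38 enabled SMs
print(sms, sms * RT_PER_SM, sms * TENSOR_PER_SM)   # 38 SMs, 38 RT, 152 Tensor
```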

On the memory side, the GeForce RTX 3060 Ti or GeForce RTX 3060 Super allegedly has 8GB of GDDR6 memory at its disposal. The memory speed and interface remain a mystery. However, the amount of memory suggests a 256-bit memory bus, like the GeForce RTX 3070. The GeForce RTX 3070 is rumored to use 14 Gbps memory, so it’s possible Nvidia is looking to give this unconfirmed card slower GDDR6 memory chips to differentiate it from the RTX 3070 in terms of memory bandwidth.
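To put numbers on that bandwidth gap, the usual formula is bus width times data rate, divided by eight bits per byte. In the sketch below, 14 Gbps is the rumored RTX 3070 speed, while 12 Gbps is purely an illustrative slower bin:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) x data rate (Gbps) / 8.
def bandwidth(bus_bits: int, gbps: float) -> float:
    return bus_bits * gbps / 8

for speed in (14, 12):   # 14 Gbps: rumored RTX 3070; 12 Gbps: illustrative only
    print(f"{speed} Gbps on a 256-bit bus: {bandwidth(256, speed):.0f} GB/s")
# 448 GB/s and 384 GB/s respectively
```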

Release Date? 

As per Nvidia’s Ampere launch schedule, the GeForce RTX 3090 and GeForce RTX 3080 come out on September 24 and 17, respectively, while the GeForce RTX 3070 arrives at an unspecified date in October. 

AMD is expected to announce Big Navi on October 7, so if the rumored GeForce RTX 3060 Ti or GeForce RTX 3060 Super is real, it’s likely Nvidia’s waiting to unveil it until after it sees what AMD drops, suggesting a potential November launch for the GeForce RTX 3060 Ti or GeForce RTX 3060 Super.

ASRock Launches Ryzen 4000 Mini PC With 2.5G Networking

ASRock Industrial Computer’s latest 4X4 BOX-4000 series of mini PCs is here to disrupt the small form factor (SFF) market. Available with three different processor options, the 4X4 BOX-4000 series targets both home and business users.

The 4X4 BOX-4000’s plastic enclosure has a 4.3 x 4.6 x 1.9-inch (110.0 x 117.5 x 47.9mm) footprint and weighs just 2.2 pounds (1kg). The mini PC can sit comfortably on your desk or you can mount it behind your screen, thanks to the included VESA mounting bracket. 

The 4X4 BOX-4000 employs AMD’s latest Ryzen 4000-series (codename Renoir) APUs that bring all the advantages of the Zen 2 microarchitecture into a confined space.

Casual users will probably get by with the Ryzen 3 4300U, which is a quad-core chip without simultaneous multithreading (SMT). The Ryzen 5 4600U sports a six-core, six-thread setup for users that desire more firepower. 

However, demanding users and businesses will probably opt for the Ryzen 7 4800U that comes equipped with eight CPU cores and 16 threads of Zen 2 power.

ASRock sells its 4X4 BOX-4000 series as barebones systems, meaning other than the included processor, you’ll have to outfit the device with your own hardware. 

The 4X4 BOX-4000 has two SO-DIMM DDR4 RAM slots and accepts up to 64GB of memory. DDR4-3200 memory modules are natively supported on the Ryzen 4000-series processors. For storage, the mini PC provides an M.2 2280 slot that adheres to the PCIe 3.0 x4 interface and accommodates both PCIe- and SATA-based drives. It also has the necessary spacing for a single 2.5-inch hard drive or SSD.

The 4X4 BOX-4000 might be small, but it has all the features you would expect in a normal desktop PC. Starting with connectivity, the device provides one USB 3.2 Gen 2 Type-A port, two USB 3.2 Gen 2 Type-C ports and a combo headphone and microphone jack on the front panel. The rear panel houses one HDMI 2.0a port, one DisplayPort 1.2a output, one Gigabit Ethernet port, one 2.5 Gigabit Ethernet port and two USB 2.0 ports.

The 4X4 BOX-4000 supports up to four displays with 4K resolution simultaneously, thanks to the HDMI 2.0a port, DisplayPort 1.2a output and two USB 3.2 Gen 2 Type-C ports. The Ryzen 4000-series’ renovated 7nm Vega graphics engine does all the graphical heavy lifting.

The Realtek RTL8111FPV powers the conventional Gigabit Ethernet port with DASH support, while the Realtek RTL8125BG is responsible for the 2.5 Gigabit Ethernet port. 

If you hate cables, you’ll be glad to know that the 4X4 BOX-4000 features Intel’s Wi-Fi 6 AX200 wireless module, meaning you get to enjoy Wi-Fi 6 speeds and Bluetooth 5.1 connectivity.

ASRock didn’t publicly share the pricing or availability of the 4X4 BOX-4000 series. However, interested parties can request a direct quote from the brand through the online form.

Samsung’s Note20 Ultra Variable Refresh Rate Display Explained

In August 2020, Samsung launched the new Note20 Ultra – an interesting device that we have on our review test bed. It’s safe to say that over the last few generations, there hasn’t been all that much exciting about the Note line of devices – the phones typically use the new silicon and camera technologies that were introduced in the Galaxy S-series of the same year, and the Note leans on its form factor, only improving upon the design and software experience around the S-Pen. This year’s Note20 Ultra, based on our testing, generally follows the same formula, but with one important exception: the Samsung Note20 Ultra has, according to the company, the first mobile variable refresh rate (VRR) screen in the industry.

What is Variable Refresh Rate, or VRR

The refresh rate, in its broadest definition, is a property of a display describing how frequently it updates itself with the latest data supplied by the graphics processor. A standard display, either on a smartphone or on a computer monitor, often refreshes 60 times per second, or at 60 Hz, with a delay between frames of 16.66 milliseconds. This 60 Hz is a static refresh rate, fixed for the lifetime of the product. Over the last decade, display manufacturers have built screens with different refresh rates depending on the content: for content that is static, the display could choose to refresh at 33.33 milliseconds, or 30 times per second, and save power; for content that is active, like a video game, if the game can be rendered quickly enough, the display could refresh at 13.33 milliseconds (75 Hz), 11.11 milliseconds (90 Hz), or 8.33 milliseconds (120 Hz).
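All of those frame times follow from one relation: frame time in milliseconds is 1000 divided by the refresh rate in Hz. A minimal snippet:

```python
# Frame time (ms) is the reciprocal of the refresh rate (Hz).
for hz in (30, 60, 75, 90, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per frame")
# 30 Hz -> 33.33 ms, 60 Hz -> 16.67 ms, 75 Hz -> 13.33 ms,
# 90 Hz -> 11.11 ms, 120 Hz -> 8.33 ms
```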

Displays can also be made with multiple different refresh rates. Depending on the product, such as a simple PC monitor, both 30 Hz and 60 Hz might be supported. Gaming devices might go the other way, and offer modes that run at 30 Hz, 60 Hz, 90 Hz, and 120 Hz, all within the same panel. These modes might be user-selectable, or activate when specific applications are running. In the gaming ecosystem, these are known as ‘high refresh rate’ displays.

Where variable refresh rate displays differ is that they can often support a wide range of frame time delays on a very granular basis. On the specification sheets for these displays, the refresh rate might be given as a range, such as ’60 – 90 Hz’, indicating that the display can support any value between these two numbers. The better displays strive to support larger ranges; however, when it comes to the smartphone market, the term ‘variable refresh rate’ has been a bit abused in recent times, as there are two ways to implement a variable refresh rate display.

The two methods are known as:

Seamless Variable Refresh Rate

Refresh Rate Mode Switching

The difference between the two is important. In a Seamless VRR display, the refresh rate is expected to change on a frame-by-frame basis as required by the system. A ‘VRR-enabled’ but non-seamless display instead relies on changing the refresh rate mode between different values – the display panel will operate in either a “normal” or “high-refresh-rate” mode, but the switching between the modes is not a continuous process. For these panels, the ‘range’ of refresh rates supported is fairly discrete, such as fractions of the main refresh rate, whereas a Seamless VRR display is designed to support a continuous range from a standard refresh rate to a high refresh rate, with everything in between.

For the most part, smartphone vendors have been playing down which of these two they have been using, advertising both as ‘variable refresh rate’. Any phone vendor that has claimed to support variable refresh rate has been misleading, as no device until now has supported a ‘seamless variable refresh rate’ that switches on a per-frame basis, which is what we would consider a true VRR solution. What these companies are doing instead is refresh rate mode switching, which is a rather important distinction.

Samsung Note20 Ultra: Seamless VRR

By contrast, Samsung claims to have achieved seamless VRR with the new Note20 Ultra, and I’ve been very curious to get my hands on a device to finally unveil how this is implemented and whether it delivers on its promises.

Starting off, the first thing a user might notice on the Note20 Ultra, compared to an S20 device, is that its high-refresh-rate mode is called “Adaptive” rather than “High”. The description text is specific in that it now states the refresh rate will go “up to” 120Hz, instead of outright stating 120Hz as on the S20 series devices. So far so good.

Investigating Seamless VRR

Digging into the software, we find some key indications on Samsung’s display mode options.

I/DisplayManagerService: Display device changed: DisplayDeviceInfo{“Built-in Screen”: uniqueId=”local:19261213734341249″, 1440 x 3088, modeId 1, defaultModeId 1, supportedModes [
  {id=1, width=1440, height=3088, fps=60.000004},
  {id=2, width=1440, height=3088, fps=48.0},
  {id=3, width=1080, height=2316, fps=120.00001},
  {id=4, width=1080, height=2316, fps=96.00001},
  {id=5, width=1080, height=2316, fps=60.000004},
  {id=6, width=1080, height=2316, fps=48.0},
  {id=7, width=720, height=1544, fps=120.00001},
  {id=8, width=720, height=1544, fps=96.00001},
  {id=9, width=720, height=1544, fps=60.000004},
  {id=10, width=720, height=1544, fps=48.0} ]}

From a software perspective, you’d normally expect Samsung’s advertised refresh rate modes from 1Hz to 120Hz to be exposed to the system; however, this is not the case, and the phone features the same resolution and refresh rate modes that were also available on the S20 series. As the data above shows, this means 48 Hz, 60 Hz, 96 Hz, and 120 Hz.

However, the key difference between the S20 series and the Note20 Ultra is that its refresh rate mode is described as operating in “REFRESH_RATE_MODE_SEAMLESS” instead of “REFRESH_RATE_MODE_ALWAYS”. In that regard it does look like things are working correctly.

However, one key component of variable refresh rate displays is the lower refresh modes that help save power. As shown in the list above, the ‘lowest’ refresh rate advertised is 48 Hz. So I went searching.

2020-09-07 19:42:16.764 948-948/? I/SurfaceFlinger: setActiveConfig::RefreshRate: ID=2, Width=1080
2020-09-07 19:42:21.758 948-948/? I/SurfaceFlinger: setActiveConfig::RefreshRate: ID=4, Width=1080

When interacting with the phone, it is possible to catch when the OS switches its refresh rates. For the above log, I was in the Samsung browser on a webpage – a situation where I would expect a high refresh rate when scrolling, a lower refresh rate when idle, and a smooth, seamless transition between the two. When I tapped the screen to interact with it and scroll, the system switched over to the 120Hz refresh rate (represented by ID=2). Four seconds later, it switched back to a 60Hz mode (shown as ID=4). This is actually quite odd, in that it isn’t what you’d expect from a seamless VRR implementation – these appear to be preset refresh rate modes baked into the operating system and tied to user interactions.

Perhaps more importantly from a battery life perspective, we would expect the switch down to the lower refresh rate to happen almost immediately, within a frame or two. The 4-second delay between the phone being in the 120Hz mode and being placed into the 60Hz mode, even though the screen is static, isn’t what we expect from a VRR implementation, seamless or otherwise – the switch should happen essentially immediately following any kind of animation, interaction, or screen movement. This needed more investigation.

It All Comes Down To New Panel Technology

Researching things further, and diving into the display panel’s drivers, we find further mentions and evidence of Samsung’s newer panel technology in the Note20 Ultra. First of all, we have confirmation that Samsung calls the new panel technology “HOP” – which we assume stands for the rumoured ‘Hybrid Oxide and Polycrystalline’ technology that Samsung has been teasing. This is similar to LTPO (Low Temperature Polycrystalline Oxide), but uses a new backplane technology that allows for faster-switching transistors, also lowering power consumption.

Furthermore, Samsung’s key feature in achieving lower refresh-rate seems to be dubbed “LFD” or low-frequency-drive. At first, it’s a bit confusing as LFD doesn’t really seem to have any kind of interaction with the user-space VRR implementation. From our analysis, LFD seems to be something that solely works at the panel and display driver (DDIC) level.

Based on the output shown below, the LFD operating modes do show that it is programmed to work with Samsung’s advertised low operating frequencies, all the way down to 1Hz. The low-frequency-drive operation also seems to be a sub-mode underneath the higher-level VRR operating modes, with the latter being the actual modes that the phone switches between in a finer manner using the MIPI-DSI interface.

/* 8. Freq. (60h): frequency in image update case (non-LFD mode), HS: 24hz~120hz, NS: 30hz~60hz
 * - 48HS VRR mode:
 *   48hz : 00 01 : div=2
 *   32hz : 00 02 : div=3
 *   24hz : 00 03 : div=4
 *   12hz : 00 07 : div=8
 *   1hz  : 00 5F : div=96
 *
 * - 48NS VRR mode: turn off LFD
 *
 * - 60HS VRR mode:
 *   60hz : 00 01 : div=2
 *   40hz : 00 02 : div=3
 *   30hz : 00 03 : div=4
 *   24hz : 00 04 : div=5
 *   10hz : 00 0B : div=12
 *   1hz  : 00 77 : div=120
 *
 * - 60NS VRR mode:
 *   60hz : 00 09 : div=1
 *   30hz : 00 01 : div=2
 *
 * - 96HS VRR mode:
 *   96hz : 08 00 : div=1
 *   48hz : 00 01 : div=2
 *   32hz : 00 02 : div=3
 *   24hz : 00 03 : div=4
 *   12hz : 00 07 : div=8
 *   1hz  : 00 5F : div=96
 *
 * - 120HS VRR mode:
 *   120hz : 08 00 : div=1
 *   60hz  : 00 01 : div=2
 *   40hz  : 00 02 : div=3
 *   30hz  : 00 03 : div=4
 *   24hz  : 00 04 : div=5
 *   11hz  : 00 0A : div=11
 *   10hz  : 00 0B : div=12
 *   1hz   : 00 77 : div=120
 */
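Reading the table, the dividers appear to act on each VRR mode’s base scan rate (96 Hz for the 48/96HS modes, 120 Hz for the 60/120HS modes), with the effective refresh rate being the base rate divided by the divider. The sketch below reproduces the table’s values under that interpretation, which is our reading of the driver comments rather than a confirmed Samsung specification:

```python
# Effective LFD refresh rate = base scan rate of the VRR mode / divider.
# Base rates are inferred from the driver table above; values round to whole Hz.
lfd_modes = {
    "48HS/96HS":  (96,  [1, 2, 3, 4, 8, 96]),
    "60HS/120HS": (120, [1, 2, 3, 4, 5, 11, 12, 120]),
}
for mode, (base_hz, divs) in lfd_modes.items():
    rates = [round(base_hz / d) for d in divs]
    print(f"{mode}: {rates} Hz")
# 48HS/96HS:  [96, 48, 32, 24, 12, 1] Hz
# 60HS/120HS: [120, 60, 40, 30, 24, 11, 10, 1] Hz
```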

The driver comments note that the Note20 Ultra’s panel is capable of “self-scanning”, and that in order to maintain the lower refresh rates it makes use of frame insertions for non-changing content. It looks like this is based on a fixed set of frequency multiples and dividers, so the mechanism isn’t capable of arbitrary refresh rates, but has a fixed set of operating frequencies below the maximum 120Hz. This ultimately puts it somewhere between the ‘mode switching’ and seamless VRR definitions; however, with this granularity it does offer a wider array of refresh rates than almost any (if not every) smartphone on the market today.

One problem (from our perspective) with this LFD mechanism is that it is seemingly completely transparent to user-space, so there’s no good way to verify that it’s active or working – the OS simply states that you’re in either the 120Hz or 60Hz VRR mode; with LFD on top, however, the actual refresh rate can be different. One way to verify this externally is simply to measure the end result that the new panel technology is meant to bring to the user: lower power consumption. It’s also here that we encounter some of the quirks in Samsung’s implementation.

Confirming Seamless VRR: Measuring Display Power Consumption

At first, when I got my hands on the Note20 Ultra, I was somewhat disappointed when the power results I obtained weren’t any different from the S20 series between the 60 and 120Hz modes. Everything looked and measured the same, with a large power penalty kicking in when switching over to the 120Hz mode, even on static screen content. This was the one scenario where the new VRR mechanism was supposed to bring great benefits.

I had reached out to Samsung Display about the matter, under the assumption that perhaps Samsung Mobile had not implemented the VRR as advertised. Initially I received back some questions about the test conditions, among which they also asked about the ambient brightness – which I found a weird thing to ask.

Sure enough, altering the ambient brightness in my office – the brightness level that the phone’s light sensor picks up – does dramatically change the power behaviour of the phone. Here is a video showing the effect of the ambient brightness on the power used by the display, where I cover the light sensor with a block.

When displaying a pure black static image in the phone’s Gallery app, I saw a drastic change in power consumption between the phone sitting in a brighter environment and covering up the top part of the device and the light sensor.

Looking into more detail through the phone’s OS logs, the device does look to actively track the light sensor values all the time, even when in manual brightness, and enters a special mode when it senses a darker environment:

In particular, it looks like whenever the phone senses an ambient brightness level below 40 lux, it forces itself to operate only in its 120Hz modes, with an additional flag that also sets the minimum refresh rate to 120Hz. By contrast, in a higher brightness setting, the “normal” operating mode has what looks to be a minimum of 48Hz.
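In pseudocode form, the behaviour we observed boils down to something like the following, a reconstruction from the OS logs with hypothetical names, not Samsung’s actual code:

```python
# Reconstruction of the observed refresh-rate floor selection (hypothetical names).
# Thresholds and floors come from our log observations: below 40 lux the phone
# pins itself to 120 Hz; otherwise the OS floor is 48 Hz and LFD can go lower.
def refresh_rate_floor(ambient_lux: float) -> int:
    if ambient_lux < 40:
        return 120    # dark environment: seamless VRR effectively disabled
    return 48         # normal operation: 48 Hz floor, LFD handles the rest

print(refresh_rate_floor(10))    # 120 -> full high-refresh power penalty
print(refresh_rate_floor(200))   # 48  -> VRR/LFD savings kick in
```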

The power behaviour measured on the phone now makes a lot more sense, and in a little “D’oh” moment I also realised that when I first measured the device, the phone had this low-brightness flag set the whole time, as the ambient level in my office was below 40 lux. It turns out that the time of day you work at, and the brightness of wherever you work, will now affect the power consumption of your phone’s display.

Measuring the base power consumption of the phones again, under different lighting conditions, we see the first factual evidence of Samsung’s new VRR/LFD benefits:

When in a dark environment, and forced into the 120Hz mode, the Note20 Ultra’s power consumption isn’t all that different from the S20 series (I’m still not sure why the Snapdragon S20U here fares so badly). This means that there is a large ~180mW power penalty present at all times, even on a black static screen, because of the 120Hz mode: we measured 640 mW and 465 mW in the 120Hz and 60Hz modes respectively.

Under slightly brighter ambient conditions, the panel is finally allowed to showcase its technology advantages, and power consumption drops drastically. In the 120Hz mode, but with the minimum refresh rate now at the regular ’48 Hz’ setting, the power figure drops from 640mW to 428mW, a massive 212mW reduction.

The 60Hz mode also sees a power benefit. In our tests, the power consumption drops from 465 to 406 mW. This indicates that LFD is indeed working in the background and reducing the panel’s refresh rate to below 60Hz – although we have no way to accurately measure exactly how low it goes.
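Pulling those measurements together, the savings from the new panel look like this (all figures as measured above):

```python
# Measured base display power (mW) on a black static image.
dark   = {"120 Hz": 640, "60 Hz": 465}   # <40 lux: VRR/LFD effectively disabled
bright = {"120 Hz": 428, "60 Hz": 406}   # brighter ambient: VRR/LFD active

for mode in dark:
    print(f"{mode}: {dark[mode]} -> {bright[mode]} mW "
          f"({dark[mode] - bright[mode]} mW saved)")
# 120 Hz: 640 -> 428 mW (212 mW saved); 60 Hz: 465 -> 406 mW (59 mW saved)
print("Static 120 Hz penalty with VRR off:",
      dark["120 Hz"] - dark["60 Hz"], "mW")   # 175 mW
```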

It Also Depends On The Content

As noted, having the phone operating under dark conditions does look to disable the ‘seamless’ variable refresh rate display, and consequently the operation that allows the phone to go into lower frequency modes.

But that’s not exactly correct in all circumstances.

In the above video snippet, we see that the device still kicks back into a low refresh rate as long as the on-screen content and brightness exceed a certain level, even if the light sensor measures 0 lux.

The problem is that this isn’t based solely on the specific screen brightness level the user might select, or that auto-brightness might choose. It also relies on the screen content, which affects the effective screen brightness: the phone will jump between switching back into the low-frequency mode or staying in the more power-hungry high-frequency mode. In that instance, the refresh rate mechanism is based on the average picture level (APL) as well.

The Effect on Battery Life

Based on everything we’ve learned so far, it comes to pass that there are now four corners to battery life on the new Note20 Ultra:

Set at 120 Hz in user options, low ambient brightness (low lux)

Set at 120 Hz in user options, high ambient brightness (high lux)

Set at 60 Hz in user options, low ambient brightness (low lux)

Set at 60 Hz in user options, high ambient brightness (high lux)

Each variant, due to Samsung’s seamless VRR implementation (which is only seamless in high brightness and/or bright content), will give a different level of battery life. Make sure to ask your favorite smartphone reviewer which one they are using.

In terms of our battery life tests, in order to showcase the differences in the results for the Note20 Ultra, I first fell back to PCMark. Here we see some mixed results.

In terms of absolute figures, the phone doesn’t get great results – it comes in below the S20 series devices. It is worth noting that the Note20 Ultra has a 10% smaller battery than the S20 Ultra, as well as a different processor – the new Snapdragon 865+ in the Note20 Ultra is more power hungry and less efficient than the regular Snapdragon 865 in the S20 Ultra – but both of these aspects are something to be covered in the full review.

When it comes to the difference in battery runtimes between running the phone in a dark or a bright environment, we do see differences, albeit somewhat small ones. In the 60Hz maximum refresh mode, VRR/LFD gains the phone an additional 4% of battery life. In the 120Hz mode, we see a larger 8.5% jump in runtime.

Looking at the power draw graph in PCMark in the 120Hz modes, we do see a drop in power from an average of 1.937W to 1.796W at 200cd/m² screen brightness. The one thing to note here is that there’s no material difference during the video editing section of PCMark, with the power results between the two modes falling within 21mW. What this points to is that in more regular, non-video content, the battery life gains could be larger than what’s experienced in this battery test.
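For reference, that average-power gap works out to roughly a 7% reduction, which lines up reasonably well with the 8.5% runtime gain:

```python
# PCMark average power at 200 cd/m^2 in the 120 Hz setting.
watts_vrr_off, watts_vrr_on = 1.937, 1.796
saving_mw = (watts_vrr_off - watts_vrr_on) * 1000
print(f"{saving_mw:.0f} mW lower ({saving_mw / (watts_vrr_off * 1000):.1%})")
# ~141 mW lower (7.3%)
```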

First Impressions

Overall, I think Samsung’s new display technologies, such as the VRR and LFD on the Note20 Ultra, make for a big leap in the capabilities of high-refresh-rate smartphones today. However, there are quirks, and the biggest negative here is the fact that below a certain ambient brightness, Samsung disables its VRR. 

Below 40 lux of ambient brightness, which is still relatively bright for indoor use-cases, we can see the VRR disabled. Users can download a lux meter app right now on their current smartphones and test their surroundings to see which scenarios apply and which don’t. Personally, this is a bit disappointing, as the biggest advantage of a VRR implementation was to get rid of the fixed, large power penalty of the 120Hz mode – a penalty which in absolute terms represents a bigger percentage of power draw during low-brightness usage of a device. Compare this to high on-screen brightness conditions, where power draw is dominated by luminance power: there the 120Hz penalty represents a smaller percentage of total power, and thus VRR brings a smaller relative gain.

The good news is that Samsung introduced a technology that is, from our perspective, software agnostic. There are some interactions between the higher-level VRR on the OS side and the lower-level LFD technology on the panel side that we wish there was more insight into at the user level; however, it’s a nice technological leap which manages to narrow the power penalty of higher refresh rate displays, and in turn should further popularise the feature.

So the final question is why Samsung has this VRR limitation in low ambient light conditions. I have reached out to Samsung Display to find out why this limitation exists, but I haven’t heard back yet. I should note that ‘faking’ a bright environment by shining a light onto the light sensor didn’t result in any noticeable picture quality differences at any brightness – but I’m sure there’s some corner case out there which does result in some degradation, as otherwise I can’t explain the limitation’s existence.

We will be following up with a full Note20 Ultra (Snapdragon S865+) review soon.

Lenovo releases first Fedora Linux ThinkPad laptop

For years, ThinkPads, first from IBM and then from Lenovo, were Linux users’ top laptop pick. Then, in 2008 Lenovo turned its back on desktop Linux. Lenovo has seen the error of its ways. Today, for the first time in much too long, Lenovo has released a ThinkPad with a ready-to-run Linux. And, not just any Linux, but Red Hat’s community Linux, Fedora.

Red Hat Senior Software Engineering Manager Christian Schaller wrote: 

This is a big milestone for us and for Lenovo as it’s the first time Fedora ships pre-installed on a laptop from a major vendor and it’s the first time the world’s largest laptop maker ships premium laptops with Linux directly to consumers. Currently, only the X1 Carbon is available, but more models are on the way and more geographies will get added too soon.

First in this new Linux-friendly lineup is the X1 Carbon Gen 8. It will be followed by forthcoming versions of the ThinkPad P1 Gen2 and ThinkPad P53. While ThinkPads are usually meant for business users, Lenovo will be happy to sell the Fedora-powered X1 Carbon to home users as well.

The new X1 Carbon runs Fedora Workstation 32. This cutting-edge Linux distribution uses the Linux Kernel 5.6. It includes WireGuard virtual private network (VPN) support and USB4 support. This Fedora version uses the new GNOME 3.36 for its default desktop.

The system itself comes standard with a 10th Generation Intel Core i5-10210U CPU running at 1.6GHz, with up to 4.20 GHz with Turbo Boost. This processor boasts 4 cores, 8 threads, and a 6 MB cache.

It also comes with 8GB of LPDDR3 RAM. Unfortunately, the memory is soldered in. While that reduces manufacturing costs, Linux users tend to like to optimize their hardware, and this restricts their ability to add RAM. You can upgrade it to 16GB, of course, when you buy it, for an additional $149. 

For storage, the X1 defaults to a 256GB SSD. You can push it up to a 1TB SSD. That upgrade will cost you $536. 

The X1 Carbon Gen 8 has a 14.0″ Full High Definition (FHD, 1920 x 1080) screen. For practical purposes, this is as high a resolution as you want on a laptop. I’ve used laptops with Ultra High Definition (UHD), aka 4K, 3840x2160 resolution, and I’ve found the text to be painfully small. This display is powered by an integrated Intel HD Graphics chipset.
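To see why UHD text gets so small at this size, pixel density scales with resolution over the diagonal; a quick sketch for a 14-inch panel:

```python
# Pixel density (PPI) = diagonal resolution in pixels / diagonal size in inches.
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return hypot(width_px, height_px) / diagonal_in

print(f"FHD 14-inch: {ppi(1920, 1080, 14.0):.0f} PPI")   # ~157 PPI
print(f"UHD 14-inch: {ppi(3840, 2160, 14.0):.0f} PPI")   # ~315 PPI, tiny text unscaled
```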

For networking, the X1 uses an Intel Wi-Fi 6 AX201 802.11AX with vPro (2 x 2) & Bluetooth 5.0 chipset. I’ve used other laptops with this wireless networking hardware and it tends to work extremely well. 

These days desktop Linux supports over 99% of PC hardware. To close that gap entirely, Lenovo is working with its third-party hardware component vendors to make sure they supply the appropriate Linux drivers. Lenovo also uses the Linux Vendor Firmware Service (LVFS) to make sure there’s current firmware available for both the main system and all its components, such as the X1’s 720p webcam and fingerprint sensor.

The entire default package has a base price of $2,145. For now, it’s available for $1,287. If you want to order one, be ready for a wait. You can expect to wait three weeks before Lenovo ships it to you. 

It may be worth the wait. Schaller loves the new Fedora-powered ThinkPad. Schaller wrote:

I am very happy with the work that has been done here to get to this point both by Lenovo and from the engineers on my team here at Red Hat. For example, Lenovo made sure to get all of their component makers to ramp up their Linux support and we have been working with them to both help get them started writing drivers for Linux or by helping add infrastructure they could plug their hardware into. We also worked hard to get them all set up on the Linux Vendor Firmware Service so that you could be assured to get updated firmware not just for the laptop itself, but also for its components.

We also have a list of improvements that we are working on to ensure you get the full benefit of your new laptops with Fedora and Lenovo, including working on things like improved power management features being able to have power consumption profiles that include a high-performance mode for some laptops that will allow it to run even faster when on AC power and on the other end a low power mode to maximize battery life. As part of that, we are also working on adding lap detection support, so that we can ensure that you don’t risk your laptop running too hot in your lap and burning you or that radio antennas are running too strong when that close to your body.

So I hope you decide to take the leap and get one of the great developer laptops we are doing together with Lenovo. This is a unique collaboration between the world’s largest laptop maker and the world’s largest Linux company. 

This isn’t Lenovo’s only pro-Linux desktop move. Lenovo is planning to certify its entire workstation portfolio for top Linux distributions from Canonical and Red Hat — “every model, every configuration.” While that’s not every Lenovo PC — the Ideapad family isn’t included — that’s still impressive. It will also preload its entire portfolio of ThinkStation and ThinkPad P Series workstations with both Red Hat Enterprise Linux (RHEL) and Ubuntu Long Term Support (LTS).

With these moves, Lenovo is joining Dell, with its support for Ubuntu Linux-powered developer laptops, such as its latest Dell XPS 13 Developer Edition, in supplying programmers with high-end hardware. Dell already offers a wide variety of top-rank Linux-powered laptops and workstations. With Lenovo heading into the same space, we can hope for better and less expensive PCs from both companies. 

How to take screenshots in Windows 10

From keyboard shortcuts to built-in apps, there are plenty of ways to take screenshots on a Windows 10 PC.

There’s any number of reasons why you may need to take a screenshot on a Windows 10 PC. You may need to send an error report to IT, capture graphics and images for a presentation, or create a tutorial on how to take screenshots in Windows 10. 

No matter the reason you’re trying to take screenshots in Windows 10, there are options. Microsoft didn’t make all of them super simple, and you’ll need to bring in additional apps like Paint to actually save the screen captures you take.

There are also a couple of Windows apps you can use to take screenshots if you’re averse to keyboard shortcuts or having to paste captured content from the Windows clipboard to a separate app for editing manually. Whichever way you prefer, there are options.

How to take a full screen screenshot in Windows 10 with keyboard shortcuts

The simplest, and most obvious way to take a screenshot in Windows 10 is probably the button that everyone has on their keyboard: Print Screen, which may also be labeled PrtScrn, PrSc, or some similarly abbreviated name. 

Pressing Print Screen doesn’t do anything obvious, so you’d be understandably confused if you were hoping a small thumbnail would appear in the lower right of the screen, à la macOS, indicating an image has been captured. Instead of actually taking a picture, Print Screen copies the contents of your entire screen to the clipboard, much as if you had highlighted some text and pressed Ctrl + C. 

In order to create an image file from what you captured with Print Screen, you’ll have to open Word, Paint, or some other program where you can paste the image and then save it with the name and extension of your choice. 

If you want to skip the copying and pasting, you’re in luck: Pressing Windows Key + Print Screen will capture the entire screen and automatically save it as an image. You’ll also get some visual feedback indicating it’s been captured, as the screen will briefly dim. 

In order to find images captured using Windows Key + Print Screen, navigate to your Pictures folder and look for the Screenshots subfolder. 

Note: If you have a Microsoft OneDrive account, be sure to check your OneDrive Pictures folder for Screenshots, as it may default there instead of to your local Pictures folder (Figure A).

Figure A
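If you’d rather script the whole capture-and-save step than paste by hand, the third-party Pillow library (an assumption on our part; it isn’t built into Windows, so install it with pip install Pillow) can do the equivalent of Windows Key + Print Screen in a couple of lines:

```python
# Capture the entire screen and save it straight to a file,
# mirroring what Windows Key + Print Screen does.
from PIL import ImageGrab   # third-party: pip install Pillow

screenshot = ImageGrab.grab()       # grabs the full screen contents
screenshot.save("screenshot.png")   # pick any name and location you like
```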

How to capture the active window in Windows 10 with keyboard shortcuts

If you want to capture just the active window, you need to press Alt + Print Screen. Again, you won’t get any visual feedback indicating it was copied successfully, but you’ll be able to paste the image into whichever app you used to take a full screen screenshot.

How to capture a screen selection in Windows 10 with keyboard shortcuts

If you just want to capture a small portion of the screen, you can do so by pressing Windows Key + Shift + S. When you do this you’ll see a small toolbar appear at the top of the screen with several snipping options (Figure B): The first snips a rectangular area that you click and drag to define, the second allows you to draw a freeform shape, the third takes a picture of the active window, and the fourth snips a copy of the full screen. 

Figure B

This is arguably the best keyboard shortcut option for taking screenshots, as it’s the only one that presents you with a thumbnail at the bottom right of the screen saying an image has been captured (Figure C). Clicking on the thumbnail will open the image in Snip & Sketch, which will be discussed more below. 

Figure C

Taking Windows 10 screenshots using Windows apps

There are two options when it comes to taking screenshots with built-in Windows applications: Snip & Sketch, which was released for Windows 10 with the October 2018 update, and the tried-and-true Windows Snipping Tool, which has been a part of Windows since Vista.

You can find both tools by typing in the Windows search bar (Figure D), or by finding it in the application list of the Start menu. 

Figure D

Snip & Sketch (Figure E) is both an image editing/markup tool and an image capture app. Clicking on New in Snip & Sketch will bring up the same menu as Windows Key + Shift + S, so think of opening it directly as an alternative to remembering that hotkey combination—it even reminds you of that when you open it.

Figure E

Next to the New button there’s an arrow that will allow you to take a snip with a time delay of three or 10 seconds. Along with delay options are basic markup functions, a crop tool, and a share option.

As for the Windows Snipping Tool, it’s essentially the same thing as Snip & Sketch with a slightly older user interface (Figure F).

Figure F

Snipping Tool even reminds you that it’s going to be phased out eventually, but it has coexisted with Snip & Sketch for a couple of years already with no indication it’s going to disappear anytime soon. 

Aside from its look, the only big differences between Snip & Sketch and the Snipping Tool are the delay option, which in the Snipping Tool ranges from one to five seconds in one-second increments, and the Snipping Tool’s lack of image editing options. All you can really do with an image captured in the Snipping Tool is mark it up with a pen, highlight it, and erase marks you’ve made.

With Snip & Sketch’s tight integration with Windows keyboard shortcuts it and its corresponding Windows Key + Shift + S keyboard shortcut it’s the most user friendly method of taking screenshots in Windows 10. 

MSI Releases Combo PI V2 1.0.8.1 BIOS Updates for X570, B550 and A520 series motherboards

MSI has started distributing optimized BIOS updates for its X570, B550 and A520 series motherboards. The Combo PI V2 1.0.8.1 BIOSes are being released and will be available to download successively from now through September.

The MSI news release is very short, but the new BIOS updates cover the following:

Optimized memory compatibility

Optimized overclocking ability of memory

Mitigated S3 resume issue

Solved RAID issue of B550 platform

Now supports UMA setting for Ryzen 4000 G-Series (Renoir) processors

The release schedule is listed below; BIOSes for all 500-series motherboards will be ready in September.

Big Navi Radeon 6000 Branding Unearthed In AMD Custom Fortnite Map And How To Find It

Only a few days ago, AMD worked with Fortnite Creative island modder MAKAMAKES to create the custom AMD Battle Arena in Fortnite Creative. Players have the option of playing three different game modes, and baked into the map there apparently is an Easter egg referencing Big Navi model numbering – or at least so it seems.

Just this evening, a personality who appears to be an AMD-sponsored Facebook streamer, GinaDarling, was playing and streaming Fortnite live on this new map when she discovered the AMD-planted Easter egg you can see below. It certainly seems to reference the upcoming Big Navi lineup from AMD, as she had to enter the code 6000 at one point in her trials and tribulations to get to the egg. Perhaps we can expect an announcement from AMD soon, now that this has been triggered? The model numbering certainly makes sense: logically, we have Radeon RX 5000, 5500 and 5700 series products, so where do we go from here? You’re quick, indeed: 6000.

To figure out how this Easter egg was found, you also have to get into the AMD Battle Arena. The AMD Battle Arena is a Fortnite map where AMD fans get to duke it out in an AMD-themed stadium. With seats, cool lights, and a phone booth surrounding the playing floor, the arena looks pretty slick under the night-sky texture. AMD posted a video about the arena, which you can see below, and that phone booth is key. Take a gander here and we’ll explain more:

As the video shows, there are a few options for game play:

Free for All
First player to 30 eliminations wins.

Unlimited respawns.

Weapons spawn randomly across the map.

Box Fights
Last player standing wins.

Eliminate all other players to win.

Four minutes per round, five rounds total.

Capture the Flag
Capture the enemy flag and bring it back to your own base to win.

You only have one life per round.

Five minutes per round, three rounds total.

You must get into either a Free-for-All or Capture the Flag map to find the Easter egg.

Once you are in either of those types of matches, do the following steps to see the Easter egg for yourself:

Run along the highest wall directly parallel to the phone booth and jump across the border (if you need help, see below)

Answer the phone with middle mouse click

Finish the parkour

Enter code “6000” at the end terminal area.

And that is it!

With Big Navi not so far away, it does not come as a surprise that AMD is trying to garner some attention. In conjunction with this Fortnite map’s release, a contest is being run as well: if a user catches a cool in-game action clip, they can share that clip on social media with the tag #AMDSweepstakesEntry for a chance to win a Maingear PC powered by AMD. This winnable PC is a little less beefy than the Maingear machine we reviewed recently, but it is awesome nonetheless.

In any case, if you want to get into these games and find the Easter egg for yourself, you can do so by going into creative mode and entering the code 8651-9841-1639 to get to the map. If you want to see Gina’s reaction to finding this Easter egg, you can view her stream here. No matter what you do, stick around HotHardware for any upcoming news on Big Navi.